Time to Divorce the Car…

We have gone far beyond a love affair with the car – the motor car, the automobile. We are past the comfortable love of wedded bliss and have reached the point where divorce must be considered, with all the compromises and new freedoms that implies…

The Love Affair

And it was a love affair to begin with! Admittedly, one that could only be pursued by the rich – the freedom given to those Edwardian pioneers to go further than on horseback, to travel in a group, and oh! the speed – think of Toad in The Wind in the Willows: “Poop, poop!” Then came the First World War with its lorries and ambulances, the interwar years when cars spread to the middle classes, and finally, after the Second World War, we were truly wedded to the car – cars for all but the poorest, and our world and lives changed beyond recognition.

Urban Sprawl and the Escape to the Country…

How were we seduced into these changes and what form did they take? It’s a tale of city and countryside, of congestion and the freedom of “the open road”. We live, mostly, in cities and towns, but most of us dream of living in the country. How many people drive 4-wheel-drive monsters that were originally designed for country-dwelling landowners but became must-have status symbols of the urban elite – “Chelsea Tractors”, as they are known – and how many of those owners ever go off-road or ever use their 4-wheel drive? Incidentally, these huge cars, often justified as being safer for your family (if you can afford them), take an unjustified amount of both materials and energy to construct, so that’s a very un-green and inequitable contribution to climate change right there!

Advertising campaigns focus on a romantic image of the SUV as the car of the great outdoors.

Levelling Up

Recent changes in internet technology, and in working practices stimulated by the Covid lockdown, mean that we can work wherever we want to – from home or at the office, the office usually being in the city and the home wherever we choose or can afford. In this respect, though, those who live in the country are still disadvantaged, since the high-speed fibre broadband available in cities has not yet reached all of the countryside. There is a clamour from rural populations for a levelling up, so that the commuters who were enabled by the post-war motor car to live outside the cities and commute in to work are not disadvantaged by slower broadband and can take full advantage of living in the country.

There are other things not available in rural areas, such as supermarkets – another phenomenon that grew with the car, because even if you live in the city, the supermarket, unlike the corner shop, is likely to be some distance from home, and how else are you to get a weekly shop home without a car? Food, DIY, furnishings and any kind of specialist shop are found in the urban environment and are unavailable out in the country – at least if you want to touch and feel the goods – because we now have online shopping. Assuming your broadband is fast enough, you can order things online, have them delivered (by motor vehicle – anything from a lorry to a private car), and if they do not match expectations (which that touch and feel would have checked), you can send them back, by vehicle again. Still, it is better to have one vehicle doing a round of rural locations than dozens of rural dwellers motoring into town and back…

This leads to other, less obvious environmental impacts. Clothing retailers are very poor at standardising garment sizes, so consumers may order a minimum of three items to bracket their size and, having tried them on, return two of them, which the retailer cannot be bothered to repack and resell – best case they go to a third-party retailer like TK Maxx; worst case, they go to landfill, still in their torn-open plastic bags. Some 92 million tons of clothing end up in landfill, and only 20% of textiles are collected for reuse or recycling globally.

The Suburban Deserts…

It is not just country-dwellers who are dependent on cars. As the car became more widely owned between the wars, ribbon development spread out along the arterial roads stretching from the cities towards those aspirational countryside locations, and after the Second World War the spaces between those roads were infilled with suburban housing estates. These estates were not provided with shops or pubs, and buses – still modelled on the earlier urban trams – did not, on the whole, extend their routes into them (as they do now, using smaller vehicles) – so there was another “driver” towards mass ownership of the car. There is now guidance on how to design housing estates to be bus-friendly, but it depends on local authorities being prepared to press this home.

– and “Liberation”

Once you had a car, many other uses for it became apparent and were catered to. Holidays at the coast demanded an ever-improving road network and the upgrading of A-roads to motorways, and with car ferries and, later, the Channel Tunnel, motorists could even take their cars abroad, if only to stock up with beer and fags on a day trip to France – how much was really saved once the petrol was factored in, I wonder? Trips to the dump, car-boot sales, lazy trips to the corner shop, driving the kids to school (sometimes even when the distance is walkable and the exercise would do both kids and parents good), Z-Cars (car crime demands police cars), driving to watch motor racing, banger racing, picnics in the rain, dogging, actually making babies, babies being born and too many people dying in cars – and so it goes on. The love affair was over and we had become truly wedded to our cars.

Divorce

Divorce is going to be painful and it is going to have to happen, because there can no longer be any doubt that climate change is happening, and happening faster and more damagingly than we imagined possible. The time for tinkering about with lightbulb replacement is past; we must all take more serious steps towards arresting the worsening problems affecting rich and poor countries alike – though the effects on poor countries are worse, while the causes, and the means of mitigation, lie largely with the rich ones. Changing lightbulbs was, initially, a costly outlay because the energy-efficient bulbs were not cheap, but then came LEDs and suddenly lighting fixtures themselves changed and every public building must change colour constantly at night. Even so, changing lightbulbs was a struggle to sell to people at first, because of the cost: even if the long-term effect was a saving, it was cash upfront and savings later, and we humans are not good at deferred gratification. If lightbulbs were hard, how much harder will it be to make serious changes in our car usage? No wedded person wants to get divorced, not least because of the massive cost, both financial and emotional, and this divorce is no different – when you blow up a life together, it is hard to believe there might be any upside, or to see beyond the chaos to a new and settled life…

Like a divorce, changing our relationship with the car is not going to be a single, simple matter. In a divorce there is housing to change, finances to separate, custody of children to settle, the record collection to divvy up, who gets the dog, who keeps the friends – let alone the emotional toll, the good breakups and the bad, the goodbye sex, the loss of support and confidant. Divorce is messy, the solutions are multi-faceted and often unique to each couple, and it will be no different with the car. Going electric is only a part of the story and, at present, a dubious part at that – so let’s examine some of the many parts that might contribute to the solution.

Social Changes

There are two avenues to be considered – the technological and the behavioural – and neither is easy or straightforward. Taking the social side first, studies have shown that some people would already save money by ditching their own cars and simply hiring a car for those occasions when they need one – holidays, weddings, big shopping trips. Many would question the word “simply”, because nothing is without cost – yes, you would need to plan ahead and make a booking, but is that really so onerous? Yes, you would no longer have a status symbol sitting on your drive that you feel reflects something about you, and some people might think you had “gone down in the world” for leaving the ranks of the car owners – but then again, many would applaud you…

Good Government – Hands On or Hands Off?

Another example of the need for a change in social attitudes is the term “public transport”, which now carries a whiff of class distinction – let us rechristen it Green Transport, or Make a Friend Transport – for in the words of a British railway advert, “Let the train take the strain!” I can vouch for this: after breaking my hip in a car crash, I was unable to drive for at least a year, and I enjoyed travelling by bus to my nearest city, some 35 miles away, so much that I didn’t return to driving a car for five years. Not only was I able to enjoy more fully the beautiful scenery where I lived back then, but the bus afforded me the opportunity to meet people I would otherwise never have chanced to talk to…

By contrast, the Conservative party here in the UK took great pride in announcing, via posters at railway stations, how the UK had the lowest subsidy for railways of any country – yet the list that the poster detailed might as well have been a ranking of railway operators from best to worst, with the UK at the bottom. The Conservatives are the party of the car, and they also adopted, as part of Neoliberalism, the idea that for governments to govern – to make considered decisions about what works best for the country and nudge, or even legislate, for it – is a bad thing, and that market forces (read unabashed greed and profiteering) should determine the course of things. They sold off (the euphemism is “de-nationalised”) British Rail, breaking it up into a series of franchises operated by the highest bidder, and, disastrously, made the rails themselves the purview of another company. Operating for profit almost always means cutting corners, and this, plus the disjuncture between infrastructure and operators, contributed in large part to a series of serious incidents, including the major crashes at Southall, Ladbroke Grove, Hatfield, Potters Bar and Stonehaven.

There are already schemes bringing commuters together, such as car pooling, where people who work together, or who have nearby home/job locations, share a car and/or the petrol expenses. What if we took this idea further, so that people within a given neighbourhood belonged to a co-owned car pool and could book a car suitable for “the day that’s in it” – a small runaround for taking and collecting the kids from school or doing the weekly shop, an estate car for that annual holiday, perhaps even a sports car to impress on that first date… There might need to be changes to insurance rules to facilitate such schemes, but nothing insuperable. In Canada, the distances to be travelled are vast, and although it is illegal to hitchhike anywhere except at a petrol station, it is common to accept passengers who pay their share of the petrol. All these ideas are facilitated by online connections – either informal or through actual websites – and this demonstrates, firstly, that people are willing to co-operate for a change that benefits the environment, even if there are some small inconveniences in co-ordinating themselves, and secondly, that these things, once mooted, can be organised from the grassroots up – with a minimum of governmental interference but perhaps the odd bit of facilitation.

The Lessons of Oxford

I grew up in Oxford and would naturally have ridden a bicycle as soon as I was old enough, even if I hadn’t had a father who became a crusader for “Intermediate Technology” (simple rather than high-tech solutions). I did not learn to drive until I was 35 and had stopped living in London, where I continued to ride a bicycle when possible or took public transport – buses or trains – if the weather was inclement. If more people took to bikes and public transport, there would be fewer cars on the road, and the benefits would be safer cycling and healthier bodies as well as the immediate environmental gains – think Amsterdam, or Oxford.

Oxford has embraced, and been the testbed for, every measure to try and rein in the motor car. First it built a Ring Road, or Bypass, coupled with one-way streets that made it less attractive to drive through the centre. Far from killing commerce in towns, many places have discovered that a more car-free city centre is such a boon that it promotes shopping, eating and sightseeing! Next, Oxford introduced bus lanes (which also facilitate bicycles and taxis) and, shortly after, Park-and-Ride schemes. The latest scheme in Oxford has, however, brought a backlash, not exclusively from those immediately affected (or benefitted, depending on your point of view) but as a rallying point for car defenders from far and wide who see Oxford’s latest experiment as the thin end of the wedge – the enforcing of automotive divorce. What is this dastardly attack on the institution of marriage to the car? It stems from the idea of the “Fifteen-Minute City”: that every city be divided into neighbourhoods within which all the essential needs of living – shops, pubs, meeting places, bus routes out – can be found within a fifteen-minute walk. Innocuous, you might think, but Oxford’s implementation – a retrospective planning measure – is to ring-fence such neighbourhoods with car barriers to limit ingress and egress for cars, and this has been seen by some as a totally unwarranted attack, not just on car owners, but on their very human rights! Protesters were not merely local residents but car supporters from far and wide who flocked to oppose the dastardly council that dared to challenge the rights of the motorist! In our necessary divorce from the motor car, a front line has been drawn…

True, such cellular enclaves have been devised before by town planners – anyone who has visited a friend in the suburbs of that planned city, Milton Keynes, will know the nightmare of entering such a cell and then not being able to find the way out. Not least, this is because the suburban cells of Milton Keynes were designed without the shops and facilities that would have made them into fifteen-minute solutions and would also have provided landmarks for navigation in a sea of identical housing. Planning for the future means making sure that housing estates are more than just housing; that cars are parked or garaged around the outside of neighbourhoods or, at least, like the mews coach-houses of old, at the back of the houses, with pedestrian thoroughfares leading to shops et al. at the front. Private developers are particularly notorious for neglecting to build any community facilities, merely cramming in as many houses, with their concomitant roads, drives and garages, as it takes to carry the occupants out of their housing-only ghetto. Retro-fitting is, as Oxford has discovered, more challenging still…

Technological Options

Turning to the technological options for divorcing us from CO2-emitting motor cars, vans and lorries – “What of the electric vehicle?” I hear you say. Setting aside the fact that you still have to generate the electricity to power electric vehicles, and all the difficulties in weaning power generation off fossil fuels, there are major flaws in the current approach to electric cars.

The current generation of electric cars is predicated on the idea that people want a car that does exactly the same things as the car they are giving up – carry five people, travel at 70 miles per hour or more, have a capacious boot, and carry forward all the crumple-zone technology which keeps us safe, up to a point, in the event of a crash. It is worth remembering that it has been determined that once over 30 m.p.h. there is very little a driver can do to influence the outcome of a crash, and that the closing speed in a head-on collision between two cars is the sum of their speeds, so two cars each travelling at 30 m.p.h. smash together at 60 m.p.h. Of course, if we travelled much more slowly, then such high-speed collisions, and the ingenious and weighty crumple zones designed to protect us from them, would be less necessary.

What has disappointed me about these new electric cars is that they look exactly as before, take the same massive amount of energy and share of Earth’s resources to construct – worse, if you think of the issues around lithium for the batteries – and they are, at this stage, affordable only by the relatively rich. I had imagined a wholly new style of lightweight runaround – cars perhaps a little like the Smart car, economical two-seaters – but if you want to achieve all the old requirements of a car listed above, then you have to go big, as big as existing cars, because you cannot fit a battery large enough to supply the range or speed to carry the weight of all that crumple-zone protective steel into a small car. It just can’t be done, and in any case the first adopters of electric cars are the better off, and it has been much easier to sell them the idea of going green by simply changing to a “greener” propulsion for an equally capable large car than for a small one which they are not in the market for anyway.

Small is Beautiful…1

If we accept the reality that people who live, and mostly drive, only in cities neither need a high-performance large car nor can use the capabilities of such a car, since congestion slows everything down, then would it not be better, for those occasions when you do need a car, to have a small electric vehicle? There is a class of cars termed Neighbourhood Electric Vehicles (how friendly does that sound!) which are small, lightweight, much less greedy in manufacturing and material costs, and travel at the modest speeds actually suitable for driving in urban environs. That they have not gained greater popularity is partly due to the hazard of driving such lightweight cars in a mix with larger, faster, heavier beasts, and partly due to a failure to grasp the necessity of switching to such vehicles.

The Renault Twizy was launched in Europe in 2012 and it is classified as a heavy quadricycle. (Wikipedia)

Electric cars are not a new idea; in fact, they were quite common until petrol engines gained the ascendancy. And what about the electric milk-float? Operating with heavy lead-acid batteries rather than today’s lighter lithium ones, these workhorses carried a good load, travelled at moderate speeds and had enough range to get the job done – proof that a realistic specification for an urban vehicle is eminently possible!

An electric milk float in Liverpool city centre, June 2005 (Wikipedia)

How Do We Get There From Here?

What if, instead of trying to change habits by applying punitive Low Emission and Congestion Charges to big cities (which become rallying points for the motoring-rights lobby), we took joined-up government decisions to promote cities as small-electric-vehicle zones? What does joined-up mean in this context? It means first promoting the manufacture of such vehicles up to and past the point of mass adoption, facilitating charging points by legislation where necessary (for example, requiring all new homes to include them), and only then enforcing the adoption of the small vehicles in cities. Those who want to retain their old-style monsters might be forced to leave them in car parks on the outskirts and continue their journeys inward in a small vehicle or a bus. Getting there from here is never going to be a single, simple solution but many parts that will eventually be greater than their sum.

My vision of the future is that there will be a two-tier division between town and country, and it will probably reflect the difference in wealth that already exists between many (but not all) rural dwellers and those who live in towns and cities. Having said that, there will be a levelling up in many ways – more bus routes in town and out; smaller electric buses; electric vehicles in town and, even in the country, used to get to the station or connect with a bus route. Status will no longer be judged by the car you own; in fact, ownership might not be the prime model at all.

New Ways of Doing Things

In Liverpool, a company operates a fleet of electric scooters that registered users can pick up and use wherever they find them, paying by distance travelled. The scooters are all GPS-trackable, and the company goes round at night, collects them and takes them back to base for recharging. Imagine how this might work for Neighbourhood Electric Vehicles. For sure you couldn’t go around with a car transporter picking them up at night – but wait! There are other clever technological solutions available. Firstly, if you operate a reasonably expensive drone, the last thing you want is for it to run out of power and crash miles away from you – so drones have a sensor such that, when they have only just enough power left to get back to you, that is exactly what they do: turn tail and use GPS to fly themselves back automatically. Combine that with driverless cars, and a company could automatically retrieve vehicles back to base and deliver them to customers’ doorsteps whenever they have been ordered. One of the issues around driverless technology is that it is aiming to be used in current types of motor car, up to and including a mix of lorries and cars on motorways – imagine how much safer it would be if the technology were used to control new, slower, lighter vehicles around town.
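To make that drone-style “return to base” logic concrete, here is a minimal sketch – purely illustrative, with made-up class names, coordinates and consumption figures rather than anything from a real fleet operator: the vehicle heads for the depot once its remaining range only just covers the distance back, plus a safety margin.

```python
from dataclasses import dataclass
import math

@dataclass
class Vehicle:
    lat: float
    lon: float
    battery_kwh: float             # energy remaining in the pack
    consumption_kwh_per_km: float  # average energy used per kilometre

def distance_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Rough great-circle distance between two points (haversine formula)."""
    r = 6371.0  # Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def should_return_to_base(v: Vehicle, base_lat: float, base_lon: float, margin: float = 1.25) -> bool:
    """Head home once remaining range only just covers the trip back, plus a 25% safety margin."""
    km_to_base = distance_km(v.lat, v.lon, base_lat, base_lon)
    range_km = v.battery_kwh / v.consumption_kwh_per_km
    return range_km <= km_to_base * margin

# A vehicle with 1.2 kWh left at 0.12 kWh/km has roughly 10 km of range; with the
# depot only a couple of kilometres away this prints False - no need to head back yet.
v = Vehicle(lat=53.41, lon=-2.99, battery_kwh=1.2, consumption_kwh_per_km=0.12)
print(should_return_to_base(v, base_lat=53.40, base_lon=-2.98))
```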

There are so many aspects to our necessary divorce from the motor car as we now use it, and just as the growth of the car brought incremental consequences, so the adoption of a new relationship must also be incremental and multi-faceted.
Let me leave you with one last anecdote that shows the interconnected nature of things. There are fleets of electric buses already operating, and some of them are contributing to solving one of the problems of regulating the power supply – especially renewable power, which can be intermittent. When the buses return to their depots after taking commuters home, they arrive back at the very peak time when those same commuters are cooking their suppers. So the buses can return whatever charge is left in their batteries to the grid, and then later in the night, when the TVs go off, recharge ready for the morning. The need to build storage batteries to smooth out supply and demand is one of the major drawbacks and costs of switching to renewable sources, but here we can see how buses, and perhaps even private electric vehicles, could become part of the battery solution – food for thought…
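To put rough numbers on that idea, here is a back-of-envelope sketch; every figure in it is an assumption chosen for illustration, not data from any real bus operator.

```python
# Vehicle-to-grid, back of the envelope: all numbers are illustrative assumptions.
fleet_size = 100             # buses back at the depot for the evening
battery_kwh = 350            # assumed usable battery capacity per bus
exportable_fraction = 0.20   # export only 20% of capacity, keeping a reserve for the morning

export_per_bus_kwh = battery_kwh * exportable_fraction
fleet_export_kwh = fleet_size * export_per_bus_kwh

peak_hours = 2               # length of the evening cooking peak
average_power_kw = fleet_export_kwh / peak_hours

print(f"Fleet could export ~{fleet_export_kwh:,.0f} kWh, i.e. ~{average_power_kw:,.0f} kW across a {peak_hours}-hour peak")
# -> Fleet could export ~7,000 kWh, i.e. ~3,500 kW across a 2-hour peak
```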

What can you do?

The most useful thing an individual can do at this moment is to examine the situation, both personally and in terms of the wider picture – take an inventory of your own circumstances and figure out whether you could change your own relationship with the car as things stand now. If not, then ask what would need to change to make it possible: petition for more bus routes; wait for electric vehicles to get smaller and more affordable; buy a bike and start to use it for more than just leisure or exercise; find car-sharers in your area, or advertise if you can’t find any. So much depends on attitudinal change that you might start discussions with other people to explore the problems and solutions – knowledge and insight are vital to change.

And please, if you have responses, questions or opinions on what I have said – post a comment and start the discussion…

Image generated with Midjourney AI by Andrew Wilson

© Andrew Wilson, 2023

  1. “Small Is Beautiful: A Study of Economics as if People Mattered” is the title of a book by E. F. Schumacher.

Exploring and Evaluating Generative AI Number Five – Barglefloop, Confusing the AI

My AI muse Misky recently made a post entitled Barglefloop, and I quote her: “barglefloop means to mess with words in your AI prompt in order to confuse it, to turn nouns into verbs, to make single words compound, etc.” It occurred to me to experiment by giving Midjourney the same prompt Misky had used and seeing whether the AI came back with the same images she got – it did not! Here are the first four I got using the prompt “Barglefloop”.

Where Misky’s images tended towards Hieronymus Bosch, mine were Harry Potter meets Lord of the Rings Rivertown. Below is the fourth image enlarged to show the level of detail the AI has put in…

But what does this tell us about the way Midjourney works? That “barglefloop” is nonsense and so the AI creates whatever it wants to – let off the leash, so to speak? I decided to add some more nonsense: “barglefloop female foxing blithy toves” – “foxing” as a noun gone verb, and a nod to the “slithy toves” of Lewis Carroll’s Jabberwocky (“’Twas brillig, and the slithy toves…”). This time the AI seized on the only bit of the prompt that made sense – fox – and gave me four fox pictures, ignoring the rest of the senseless prompt. So Midjourney, whilst known to hallucinate as much as the next AI, was not so desperate to act freely when released by nonsense…

Cute, but no cigar for Midjourney, so now I went for all the nonsense and none of the foxing around – “barglefloop blithy toves” – and we get something quite nightmarish in a Snark/Jabberwocky, Carrollian sort of way, with perhaps a hint of Bosch…

So lastly I decided to leave out the “barglefloop” and just retain the Lewis Carroll-style words, “twas brillig and the blithy toves”, and now we can see an Alice in Wonderland flavour to the images – particularly the top two…

Finally, I asked for variations on the bottom-right image – a rather mad-looking figure with a slightly Victorian flavour…

Does this tell us much about the way the “mind” of an AI works? I will let you be the judge – and stay away from hookah-smoking caterpillars seated on toadstools, is my advice…

Exploring and Evaluating Generative AI Number Four – Working with AI and the Women’s Issue…

During the month of April this year, whilst participating in the A to Z Challenge, I was privileged to encounter the work of Misky, whose blog, It’s Still Life, showcases two distinct things: poetry written by Misky, illustrated with generative artwork she creates using the Midjourney AI app. So amazing were these images to someone who is, in part, a visual artist, that they inspired me to make an exploration of Generative AI for myself. At the same time, AI has been hitting the headlines big time, mainly for its use in text generation and the impact it might have on jobs, and since writing is another thing I do in my day job, I was also intrigued to see whether AI might be of any use in a company such as the one I work for. (I am the gradually retiring General Manager of a food manufacturing company.) It has been a fascinating voyage of discovery and, to cap it all, lying awake at 4 o’clock this morning, I found myself listening to “The Conversation” on the BBC World Service, and what should the topic be but AI, with special reference to the involvement of women. So, mind on fire, I am going to draw this series together, although I freely acknowledge I have but dipped my toe in the waters of AI and may return to the topic in the future…

To recap the three articles I have already written: in the first one, I tried out ChatGPT to see what it could research and write about one of the topics from my A to Z, and immediately encountered the phenomenon of AI hallucination – the ability, in fact the tendency, of AI to make things up. I also “showcased” my first attempts at visual collaboration with the Midjourney bot.
In the second report, I compared ChatGPT to Writesonic, which produces lengthier articles, testing them against a typical (for me) work assignment.
In the third report, I looked at the most controversial assertion about AI – that it might, in the future, eliminate human beings, Terminator-style – and referenced articles that thoroughly refute the need to worry about that particular outcome – go reassure yourselves! However, there are many things about our present and future use of AI that do bear looking at, and these were raised in the episode of “The Conversation” that woke me up this morning. The programme, presented by a woman, featured two women working in the field of AI, one a philosopher and one an expert in data analysis, and as well as the general concerns that need addressing about AI, they highlighted the general lack of representation of women in the field – only one CEO, qualified women failing to get jobs in the industry, and so on. They did, however, point out that one of the changes to AI itself in recent times has been its accessibility: no longer do you need a degree in computer programming – you could make your first interaction with ChatGPT in the same time it would take you to query something on Google. Which brings me back to Misky…

Misky was not only the inspiration for my (deepish?) dive into AI, but was extremely helpful and encouraging to me at the outset – itself a reflection of how women tend to be more collaborative, good team players, which the contributors to “The Conversation” suggested is a good reason for women to be more involved in AI companies, in reviewing the implications, and in forming the regulation which is undoubtedly necessary around AI. A few days ago, I was delighted to meet Misky face-to-face on a Zoom call after many text interactions online, and one of the things she shared in our too-brief call was that she had had some push-back from certain readers of her blog about the use of AI images. I would like to talk to her more about these issues, but the participants in “The Conversation” raised the question of how artists, whose work has been studied by AI to create new images “in the style of”, are being short-changed. You may have been wondering about the image at the top of this post – I created it in Midjourney by prompting it to “imagine” Knaresborough railway viaduct “in the style of Hokusai”, a master of Japanese woodblock prints. I have used this subject as my test piece for exploring what Midjourney can do, as you will see in the previous post. Now, Hokusai is long dead, so compensation is hardly an issue, but another group of more recent artists might object. I am working on a spoof post – “How to Make a Body”, a tale of human reproduction in the style of an Internet recipe – and although, like Misky’s, the writing is all my own, I wanted an illustration to fit the tone of the piece, so I prompted Midjourney to “imagine” a woman in a hospital bed, holding her newborn baby, with her husband leaning in, “in the style of a Ladybird book cover”. For those of you who may not be familiar with Ladybird books, they were written for children from the 1940s until the 1980s and feature a distinct style of illustration.

In recent years, a series of spoof books in the Ladybird style, aimed at those who had grown up with the original series, has been very successful, for example…

I had no idea whether Midjourney would be able to fulfil my prompt – there are lists of artists’ styles you can use with Midjourney, but I hadn’t seen this one – and I was not disappointed!

I am keeping my powder dry as to the final image I chose, but this first set of four (Midjourney shows off by producing not one but four attempts in under sixty seconds), done to the prompt “A new mother in a hospital bed with her husband leaning in as she holds their new baby in the style of a Ladybird Book Cover”, has misunderstood my intention and the mother is holding a magazine rather than a baby – though the graphic style is very Ladybird-book-like. I acknowledge that I am still only a beginner in my use of prompts with all the forms of AI I have tried so far, and there is undoubtedly an “art” to getting it right, which is why I said “I created it in Midjourney”. Although I am competent at watercolour, screen-printing and other forms of illustrative art, I could not produce images such as the above, and certainly not in sixty seconds. So, how much of this creation is my prompt, how much is the brilliant programming behind Midjourney, and how much is owed to the various artists who produced the illustrations of the Ladybird books? I cannot begin to answer that question, but it does raise an issue which needs considering when formulating regulation around the use of AI. Meanwhile, like Misky and me, jump in, have a go, and get a feel for yourself for the god-like feeling of creating with an AI tool…

Much of the debate around the consequences of the rise of AI is about its impact on jobs and the potential losses and gains. As I described in my first report, the development of computer spreadsheets swept away the lowly positions in accountancy but opened up many more jobs at the high end of the profession, and although this might be the hope for AI – that it liberates us from the menial and allows us to create new roles, roles which might be beyond the capability of AI to imagine – at present it is not just menial tasks that are threatened by bots like ChatGPT, but some roles higher up in various industries. Having said that, given the tendency of AIs to hallucinate, I wouldn’t trust an AI’s writing without an experienced human checking the output before sending it out! Also, when you are a creative individual yourself, trying to get an AI to produce exactly what you have in mind is tricky. In my 2021 A to Z Challenge, I was trying to complete a science-fiction novel, and the exercise gave me enough momentum to finish it a few months later. Then I set about creating a book cover for it – featuring the final denouement, a tense scene set in a space elevator on the edge of space. I prepared the background view by Photoshopping some NASA photographs looking the length of the Red Sea towards Palestine, painted in a great river estuary as per my planet, then superimposed some 3D elements which I drew up in AutoCAD, and finally added the title and my name. You can see this below; however, I felt that the result was not quite up to the standard of artwork commissioned by big sci-fi publishers and imagined that, in the unlikely event of the novel being published, an improved version of the cover would be substituted for my “sketch”.

© Andrew Wilson 2022

Back to today, and naturally, I thought it would be a good test of Midjourney to see whether it could be used to produce a better version of my cover. Well, the first attempts were brilliant style-wise, but nothing like the image I wanted and many attempts followed to no avail…

My prompt read “space lift arriving at 300 miles above Earth like planet over Sahara like region array of cargo containers spread out in one layer small spaceship approaching”. Midjourney couldn’t understand “space lift” and I had to change lift to elevator; it couldn’t understand “array of cargo containers”, but it did have all the sci-fi style I wanted. So then I decided to create a space-view background without the lift and substitute it into my own cover illustration. Bingo!

© Andrew Wilson 2023

Still, I hanker after the crisply detailed images of the elevator that Midjourney is capable of, if only I could prompt it correctly – so, a work in progress… What this exercise does show is that it is possible to use AI for the things it can do better, in combination with human talent.

In Conclusion…

This exploration of AI has felt like a marathon, and it is just one person’s experience – I am really only at the beginning. I’m sure I will find both text- and image-generative bots to be of use in my future work and play. I urge you all to experiment for yourselves, form your own judgements (and please share your results by linking in the comments), join the debate over the regulation of AI, and explore other artists – in particular Misky, who began this journey…

Exploring and Evaluating Generative Artificial Intelligence Number Three

I decided to make a Header Image (above) for this little series of posts and have retrofitted it to the two previous posts, here and here. I asked the Midjourney app on Discord to depict a silver-skinned android, firstly standing at an easel painting, and then at a computer typing. I am fairly sure that the AI known as Midjourney had no sense of the irony of being asked to anthropomorphise an android doing these activities, because current forms of AI are so far from having the sentience required to appreciate concepts as subtle as irony. Spoiler alert: I approached this evaluative exploration with certain preconceptions about the likely conclusion, although I didn’t know for sure how those conclusions might be reached, because I didn’t know how AIs work in detail. What I am going to show you today is what I have learned, but I am also going to link you to a very erudite analysis of why we should not be worried about AI taking over the world – in a piece called “Why the AI Doomers Are Wrong”, Nafeez Ahmed explains why the direction of travel of AI development simply can’t lead to a human-brain-like sentience. I will quote from his article later.

First of all, look at the left-hand side of the header picture, in particular the easel. On close inspection, you can see that the easel is the wrong way round and that the painter/android is standing behind it. Midjourney produces four images by default, in the remarkable time of about 60 seconds, which is almost like magic – indeed, in 1962, the science fiction writer Arthur C. Clarke stated in his book “Profiles of the Future: An Inquiry into the Limits of the Possible” that “Any sufficiently advanced technology is indistinguishable from magic.” So, despite the apparent magic of creating these images so quickly, the AI has made a fundamental mistake that reveals it doesn’t really understand what an easel is or how it should be used. Nafeez Ahmed is mostly talking about text-generative interactions with AI – ChatGPT and the like – but what he says below is equally applicable to images generated by AI…

The stunning capabilities of the new chatbots and their uncanny ability to engage in human-like conversations has sparked speculation about sentience. Yet there is zero tangible evidence of this beyond a sense of amazement.
What many miss is that the amazing nature of these conversations is not a function of an internal subjective ‘understanding’ by an identifiable entity, but rather the operation of algorithms made-up of hundreds of billions of parameters trained on massive datasets of human text to predict words that should follow given strings of text. {…} This is not a breakthrough in intelligence, although it is, certainly, a breakthrough in being able to synthesise human responses to similar questions and thereby mimic patterns in human interaction. This model of AI, therefore, cannot, in itself, generate fundamentally new knowledge or understanding – let alone sentience.

Nafeez Ahmed

Nafeez goes into great detail about how the research is headed in the wrong direction and, indeed, how unlikely it is that it will ever succeed in equating to human sentience, so if you want to put your mind at rest about a Terminator-style future in which humans are subjugated by machines – nip on over and read the full article. Meanwhile, I am going to show you some more examples of how Midjourney gets things “wrong”, how to get the “right” results, and what that says about how useful such a programme can be.

You interact with the Midjourney app by sending it a message (just as if it were really an intelligent entity) containing a prompt, and once you receive your four images, you can choose one to enlarge, if you are satisfied with it, or run variations on one or all of them. Here is the prompt that produced the above set of images: “Silver android painting at an easel by Roy Lichtenstein”. The AI places most importance on the object at the beginning of the prompt, then on the activity described, and lastly it attempts to produce the images – in this case in the style of the Pop Artist Roy Lichtenstein, famous for paintings in the style of close-ups of comic-book pictures. These close-ups show the dot screens that were used to shade the illustrations of the comic book, plus the hard black outlining, and Midjourney has picked up well on these style features, particularly in the top-right and bottom-left pictures. The top-left shows a sculpture vaguely resembling an up-cycled easel made of silver, and the bottom-right shows a silver-skinned figure with a dot-screen effect, holding a brush and painting a picture, but with no easel. In the top-right picture, the top of the easel is just showing in the bottom corner and the android “artist” is holding a small blank canvas in her hand and drawing on it. Having seen the header image at the top – and these pictures were as near as I could get to what I wanted, from multiple attempts – you can see that what I wanted was an all-over silver-skinned android, whereas in the images above, the top-right has a human face although “her” body is robotic (perhaps cyborg is a better description), whilst the other pictures show a sculpture, a woman and a totally abstract figure. So I decided to change the prompt to “robot” rather than “android”, which produced better results. The reason I had started with “android” was that robots range from automatic hoovers that move around your rooms like miniature flying saucers sucking up dirt, to more anthropomorphic devices – which is what I wanted.

“Standing silver robot painting at an easel by Roy Lichtenstein” produced (among others) the above image, in which the robot, possibly standing, is grasping what looks like the top of an easel, but the “painting” does not appear to be on the easel. So I tried “Robot standing painting at an easel” and got this rather cute robot who looks like he is sitting on an invisible chair – “Hey, Midjourney, just because you don’t show the chair doesn’t make it standing!” Notice that with the style reference to Roy Lichtenstein gone, this image is very different. I would like to show you more of the iterations, but Midjourney froze and, when I reinstalled it, it had lost the entire session of work – you just can’t get the staff…

Another thing I have discovered in my experiments is that both Midjourney and ChatGPT like to add unspecified embellishments – remember in my first report how ChatGPT found the correct explanation for the phrase “Cold enough to freeze the balls off a brass monkey” but then added a totally made-up explanation? Well, Midjourney does the same thing. Here is a picture of the railway viaduct at Knaresborough in North Yorkshire, an hour’s drive from where I live.

I wanted to see if Midjourney could produce a collage image using fragments of maps, which it attempted, but it didn’t really understand the concept – although I am not saying that it can’t; at the very least, my prompt wasn’t sufficient (one of the oldest sayings amongst computer programmers is “Garbage in, garbage out!”). Here is Midjourney’s best effort…

There are some map elements and the whole scene has been chopped up and rearranged but not in a way that makes sense – this one is closer to the real view…

But my first attempt, before I added the collage style, was simply to see how Midjourney would find and represent the viaduct, and it generated the four images below. In the top-left image, Midjourney has added another railway running beneath the viaduct; likewise, in the lower-left it has added a canal; and in the images on the right, Midjourney has transported us into a past sans Knaresborough and a post-apocalyptic future where vegetation is growing over the viaduct.

Enough with all the pretty pictures – what does all this reveal about the way the AI Midjourney works? Referring to the work of Erik J. Larson in his book The Myth of Artificial Intelligence, Nafeez Ahmed cites a summary by Ben Chugg, lead research analyst at Stanford University (I know – quotes within quotes), as follows:

“Larson points out that current machine learning models are built on the principle of induction: inferring patterns from specific observations or, more generally, acquiring knowledge from experience. This partially explains the current focus on ‘big-data’ — the more observations, the better the model. We feed an algorithm thousands of labelled pictures of cats, or have it play millions of games of chess, and it correlates which relationships among the input result in the best prediction accuracy. Some models are faster than others, or more sophisticated in their pattern recognition, but at bottom they’re all doing the same thing: statistical generalization from observations.
This inductive approach is useful for building tools for specific tasks on well-defined inputs; analyzing satellite imagery, recommending movies, and detecting cancerous cells, for example. But induction is incapable of the general-purpose knowledge creation exemplified by the human mind.”

https://towardsdatascience.com/the-false-philosophy-plaguing-ai-bdcfd4872c45

Nafeez goes on:

Current AI has become proficient at both deductive and inductive inference, with the latter becoming a primary focus.
Larson points out that human intelligence is based on a far more creative approach to generating knowledge called ‘abduction’. Abductive inference allows us to creatively select and test hypotheses, quickly eliminate the ones which are proven wrong, and create new ones as we go along before reaching a reliable conclusion. “We guess, out of a background of effectively infinite possibilities, which hypotheses seem likely or plausible,” writes Larson in The Myth of Artificial Intelligence. {…}
And here is Larson’s killer diagnosis: We don’t have a good theory of how abductive inference works in the human mind, and we have no idea how to recreate abductive inference for AI: “We are unlikely to get innovation if we choose to ignore a core mystery rather than face it up,” he writes with reference to the mystery of human intelligence.
Before we can generate genuine artificial intelligence that approaches human capabilities, we need a philosophical and scientific revolution that explains abductive inference. “As long as we keep relying on induction, AI programs will be forever prediction machines hopelessly limited by what data they are fed”, explains Chugg

https://www.bylinesupplement.com/p/why-the-ai-doomers-are-wrong?utm_source=substack&utm_medium=email

To relate this back to my experiments with Midjourney: the AI could identify what an easel looked like and include it in an image, but it didn’t really know what it was or how it was used. Easels are probably present in thousands of pictures of artists’ studios, as well as adverts, but I bet there isn’t a Painters 101 saying “First you will need an easel, and this is how you use it”, because when a human artist goes into a studio and sees canvases fixed onto easels, even if he has never seen one before, he is there to paint canvases and it is obvious what they are and what they are for. It might be obvious to a human being, with our inferential, deductive and abductive capabilities, but an AI may be able to identify an easel and yet, without finding a clear description of its usage, never fully fathom how to use it…

As for the tendency to add extraneous details: the algorithms that govern generative AIs are designed to mimic human conversational style, so when one has found a relevant answer to the requested information or task, it looks to extend the conversation with whatever it has learned might plausibly follow – it doesn’t know whether that is true or not, because that is way above its pay grade (a metaphor it probably wouldn’t understand either). This phenomenon of AIs making things up is called hallucination – a very anthropocentric term…

I will make one more report on my attempts to get exactly what I wanted from Midjourney and how I found a compromise to be able to work with the results…

Exploring and Evaluating Generative Artificial Intelligence, Number Two…

ChatGPT is not the only flavour…

ChatGPT is the AI offering from OpenAI and is probably the most talked-about bot since its launch last year – however, it is not the only one, so today I am going to compare it with Writesonic and see how they handle a piece of promotional writing. I still work 2.5 days a week as a manager at a gelato and pudding factory – which sounds great unless you are diabetic, like me. Anyway, we are trialling some new gelato flavours, and I thought it a good moment to see how an AI generative writer might handle promoting one of them.

ChatGPT

Having no idea what might emerge, I started by putting in a very simple prompt – “Promote a new gelato flavour – Caramel Popcorn Gelato” – and this is what emerged a minute or so later.

“Introducing the newest addition to the world of gelato: Caramel Popcorn Gelato! This delectable treat is the perfect blend of sweet and savoury, with creamy caramel gelato swirled with crunchy popcorn pieces.
One spoonful of this delicious gelato and you’ll be transported to your favourite movie theatre, with the aroma of freshly popped popcorn and the indulgent taste of caramel. The creamy texture of the gelato perfectly complements the crispy texture of the popcorn, making for a truly delightful dessert experience.
Whether you’re looking for a sweet treat to enjoy on a warm summer day or a cosy dessert for a movie night in, Caramel Popcorn Gelato is sure to satisfy your cravings. So go ahead and indulge in this delicious new flavour – you won’t be disappointed!”

I have changed the spelling of “flavor” to “flavour”, but otherwise this is straight from the horse’s mouth, and for so little guidance it is a pretty remarkable effort in just 128 words. As a marketing piece it leads straight to the movie/popcorn evocation, as well as describing the mouthfeel and flavour.
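As an aside, the same experiment can be scripted rather than typed into the ChatGPT web interface. Below is a minimal sketch using OpenAI’s Python library as it stood when this was written (the pre-1.0 ChatCompletion interface); it assumes you have an API key in the OPENAI_API_KEY environment variable, and the model name and settings are illustrative choices rather than whatever the web interface itself uses.

```python
import os
import openai  # the pre-1.0 "openai" package current at the time of writing

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes you have set an API key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "user", "content": "Promote a new gelato flavour - Caramel Popcorn Gelato"},
    ],
    max_tokens=200,   # roughly the length of the piece quoted above
    temperature=0.7,  # allow some creative variation
)

print(response.choices[0].message["content"])
```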

Writesonic

The process of specifying, and even joining, the bot service was much more complicated – it assumed that I was a business user and gave me 10,000 free words to trial the product. Then it asked for a Subject for the piece and offered to search for Keywords if I did not supply them (which I did – the name of the flavour); it asked what sort of Tone the piece should have, of which I picked Excited; and it then generated Ideas for the article, followed by a default three Outlines for an Article. I could have had more outlines, but reading through them, the three were quite sufficient, and I then had the chance to add or remove the ten or so points the article would consist of – I narrowed it down to five; the rest would be useful further along in the promotion. Finally, I pressed Generate and in short order got my article. I had tried to indicate a length of 130 words to be comparable with ChatGPT, but Writesonic did its own thing – 465 words, which, together with the preliminary stages, reduced my 10,000 words to 9,271. Here it is, complete with photo and in the style of a blog post:

A Delicious Breakthrough: New Caramel Popcorn Flavour Gelato

This was the image that Writesonic sourced from the internet which it merely credited to Unsplash – I have reverse-searched and added the full credit link –
Photo by Toa Heftiba on Unsplash

As a gelato lover, I’m always on the lookout for exciting and unique flavours to try. So, when I heard about the new caramel popcorn flavour gelato, I couldn’t wait to give it a taste. As soon as I took my first spoonful, I knew that this flavour was going to be a game-changer. In this blog post, I’m going to share my thoughts on this delicious breakthrough.

The Inspiration Behind the Creation of the Flavour

What inspired the gelato makers to create a caramel popcorn flavour? The answer is simple: the love of movies. Think about it, what’s a movie without a tub of popcorn? And what’s better than caramel popcorn? That’s right, a caramel popcorn flavour gelato!

The gelato makers wanted to create a flavour that would bring the movie theatre experience to your taste buds. They experimented with different combinations of caramel and popcorn, trying to find the perfect balance. They wanted to create a flavour that was sweet and buttery, with a hint of saltiness. After many attempts, they finally found the perfect recipe.

How to Pair the Caramel Popcorn Flavour Gelato

Now that we know the story behind the creation of the flavour, let’s talk about how to pair it. The caramel popcorn flavour gelato is perfect for those who love sweet and salty combinations. It’s a great flavour to enjoy on its own, but it can also be paired with other flavours to create a delicious dessert.

One of my favourite ways to enjoy the caramel popcorn flavour gelato is to pair it with a chocolate flavour. The richness of the chocolate complements the sweetness of the caramel, while the saltiness of the popcorn adds a nice contrast. Another great pairing is with a fruit flavour, such as raspberry or strawberry. The tartness of the fruit provides a nice balance to the sweetness of the caramel popcorn.

Conclusion and Final Thoughts on the New Flavour

In conclusion, the new caramel popcorn flavour gelato is a delicious breakthrough that’s perfect for movie lovers and gelato enthusiasts alike. The gelato makers behind this flavour deserve a round of applause for their creativity and dedication to bringing new and exciting flavours to the table.

If you haven’t tried the caramel popcorn flavour gelato yet, I highly recommend giving it a taste. It’s a flavour that will take your taste buds on a journey and leave you wanting more. So, go ahead and indulge in this sweet and salty treat – you won’t regret it!

CTA: Head to your nearest gelato shop and try the new caramel popcorn flavour gelato today!

Which was best?

ChatGPT did an amazing job for such a skinny prompt, but I can see that Writesonic would deliver a much more refined article with a lot more options to craft the result, which was fully SEO optimised, for those of you with a nerdy disposition.

A Midjourney image generated by the prompt –
salted caramel popcorn flavour gelato in the style of a Baskin-Robbins ad

To conclude, I am going to show you another gorgeous image generated by the Midjourney AI bot from the prompt “Wild Rose and Lonicera in the style of Tenniel --version 5 --tile”…

Exploring and Evaluating Generative Artificial Intelligence, Number One…

AI or Artificial Intelligence, is a hot topic at present with governments weighing up how to invest in the sector, insiders from the industry warning of the potential dangers and the media stoking fears from job losses to the eradication of the human race. During the recent A to Z Challenge 2023, I was highly impressed by the images selected by Misky in her blog IT’S STILL LIFE – A.I. ART & POETRY and decided to investigate the whole subject of AI for myself. I also wanted to know about Generative Text AI because my work demands various forms of word creation and I had heard that this might be one of the first areas to suffer job losses to AI.

This tile design was generated by Midjourney 5 with a prompt of no more than “Passion Flower --tile”. It lines up perfectly on every edge, a task which is difficult to do by hand…
Using third-party software, you can turn Midjourney’s tile into wallpaper and appreciate the seamless matching of edges.

But first, a little thought about where I understand us to be right now. There is Soft AI (think Alexa) and then there is the dreaded Hard AI (think Terminator). Soft AI is already in use all around us – Grammarly, predictive text on your smartphone, Alexa, Google and other search engines – but the reason people are not running around screaming “The Terminators are coming!” as a result of all this Soft AI action is that the intelligence quotient of this sort of AI is pretty low. For example, say to your “smart” speaker, “Alexa! Tell me a joke!” and Alexa will oblige with a joke of variable humour; but then say, “Alexa! Tell me another!” and Alexa will not know what you want – unless you specify “joke”, in which case you could leave the “another” out, since it is irrelevant to Alexa that she has just told you a joke. This means that Alexa is hardly conducting a conversation. On the other hand, she can sort through literally millions of bits of data to find and play you the song you want to hear. So this sort of AI is doing exactly what we want – conducting the repetitive, boring, difficult (because of the magnitude of data) tasks which none of us really enjoy doing or, given the scale of data today, could even do! I was reading about the sea-change brought about in accountancy when computers could sit on every desk and, more particularly, when those computers could run spreadsheets: there were fears of mass redundancies in accounting firms. And so there were, at the clerk level of the business, but amongst the higher-level accountants, jobs boomed like never before, because the things they could do with spreadsheets – the new things they could offer their clients – had been undreamable before. So let us take the tales of mass unemployment with a pinch of salt – after all, I wouldn’t mind betting that some of the accounting clerks were able to upgrade their skills and move up the ladder with the help of a computer…

Above is an example of a task which is tricky for any graphic designer to do manually – designing a seamless tile repeat. I will come back to this particular task in the future, but for now, to say that it was accomplished in under sixty seconds is simply amazing!
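If you would like to try the wallpaper trick yourself, you do not strictly need third-party software: a few lines of Python with the Pillow library will repeat a saved tile across a larger canvas so you can inspect the seams. This is only a minimal sketch; the filenames and repeat counts are my own choices, not anything Midjourney prescribes.

```python
# A minimal sketch: repeat a saved Midjourney tile across a larger canvas so
# the seamless edges can be inspected. The filenames and number of repeats
# are my own assumptions.

from PIL import Image

tile = Image.open("tile.png")              # the tile saved from Midjourney
cols, rows = 4, 3                          # repeats across and down
wallpaper = Image.new("RGB", (tile.width * cols, tile.height * rows))

for row in range(rows):
    for col in range(cols):
        wallpaper.paste(tile, (col * tile.width, row * tile.height))

wallpaper.save("wallpaper.png")            # open this and look for visible seams
```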

Text AI

So first of all, I want to look at my first encounter with ChatGPT, which was developed by OpenAI and which they describe as a Large Language Model Generative Pre-trained Transformer chatbot. What that means is that an AI computer programme has been fed a huge amount of written material – novels, adverts and non-fiction from the World Wide Web – and not only does the AI have access to all this material, but it can stitch information together according to very clever algorithms to write “original” material. So I thought I would test it out by asking it to write a piece about one of the more colourful phrases from my recent A to Z Challenge – phrases we mostly know the meaning of but have forgotten the origins of. First, here is my explanation:

“Cold Enough to Freeze the Balls Off a Brass Monkey!” – you may be beginning to see the “lost in translation” aspect of sayings overheard from sailors by landsmen, in which case you will not be surprised to learn that this expression has nothing to do with statues of simians cast in brass losing their genitalia!

A “monkey” on a wooden ship, especially a warship, was a (usually wooden) tray with rounded depressions in which cannon balls were stacked in preparation for battle – the last thing you wanted was heavy cannon balls rolling around a pitching deck, let alone a “loose cannon” – so cannons were secured (against recoil on firing) by strong ropes and cannon balls were kept on a monkey. I said that monkeys were usually made of wood, but admirals, or very lucky captains who had made a lot of money from their share of “prizes” (captured ships, evaluated and paid for by the Navy), were allowed to prettify their ships with gold leaf, dress their crews in custom, fancy uniforms, and purchase brass monkeys rather than the standard-issue wooden ones. Now here’s the thing: a monkey was carefully designed so that you could pile the cannon balls up in a pyramid, to maximise your supply of cannon balls in as small a space as possible, but the thermal coefficient of expansion of brass (the monkey) and iron (the cannon balls) is different – the brass monkey shrinks more than the iron in very cold conditions. The carefully spaced second and subsequent layers of the pyramid then no longer fit their positions and can roll off the monkey, hence “Cold Enough to Freeze the Balls Off a Brass Monkey!” – Simples! Not!
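For anyone who wants to check the science behind that claim, a quick back-of-envelope calculation will do. The expansion coefficients below are approximate textbook values, and the monkey length and temperature drop are invented purely for illustration.

```python
# A back-of-envelope check that brass really does shrink more than iron in
# the cold. The expansion coefficients are approximate textbook values; the
# length and temperature drop are invented for illustration.

alpha_brass = 19e-6   # linear expansion per degree C (approximate)
alpha_iron  = 12e-6   # linear expansion per degree C (approximate)

length    = 1.0       # a one-metre stretch of monkey, say
temp_drop = 40.0      # a plunge of 40 degrees C

shrink_brass = alpha_brass * length * temp_drop
shrink_iron  = alpha_iron * length * temp_drop

print(f"Brass shrinks by {shrink_brass * 1000:.2f} mm")   # about 0.76 mm
print(f"Iron shrinks by  {shrink_iron * 1000:.2f} mm")    # about 0.48 mm
```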

Now here is ChatGPT’s first attempt:
“The expression “cold enough to freeze the balls off a brass monkey” is a well-known phrase that means that it is extremely cold outside. Although it is a common phrase, its origin and accuracy are often disputed.
One theory suggests that the phrase originates from the age of sailing ships when cannonballs were stored on the deck of the ship in a brass frame called a “monkey.” The frame consisted of several round holes where the cannonballs were stacked. During cold weather, the brass frame would contract more quickly than the iron cannonballs, causing them to fall out of the frame. Hence, the phrase “cold enough to freeze the balls off a brass monkey.”
However, there is little historical evidence to support this theory. The phrase “brass monkey” itself is not found in any nautical dictionaries or records, and the idea that brass contracts more quickly than iron is also questionable.

Another theory is that the phrase originated from a Victorian-era pub game called “brass monkey,” which involved placing brass balls on a tray and attempting to knock them off with a ball bearing. The game was typically played in cold weather, and if the temperature dropped low enough, the brass balls could freeze to the tray, making it impossible to play. Hence, the phrase “cold enough to freeze the balls off a brass monkey” may have originated from this game.
However, like the previous theory, there is little concrete evidence to support this theory.
Overall, the origin of the phrase “cold enough to freeze the balls off a brass monkey” remains uncertain. It is likely that the phrase has evolved over time, and its origins may be lost to history.”

So what do we think of this? Well, firstly, “the idea that brass contracts more quickly than iron is also questionable” is simply not true – science tells us that brass both expands and contracts more than iron – so there is a lie right there! Secondly, ChatGPT says that there is no evidence of the phrase in “nautical dictionaries or records”. Whether or not such dictionaries even exist, there are plenty of references to this particular phrase and its origin in many places on the web and in literature – I first heard of it in the books of Patrick O’Brian, who spent much of his life carrying out research for his books in the Admiralty records, and I would trust him implicitly. You have to find as many references as you can when researching a subject like this. There are, of course, always people who will dispute the origins of various phrases, and so perhaps ChatGPT gave undue credence to these doubters. But then ChatGPT goes even further and completely fabricates an alternative theory! The idea of a “Victorian-era pub game called ‘brass monkey’” does not show up in any search of the internet whatsoever, so ChatGPT has not only dissed the correct explanation but suggested a completely fictitious alternative – all done with a convincingly authoritative tone. Luckily, there is a Regenerate button and, on pressing this, without any refining of the prompt or critiquing of the first result, ChatGPT came clean and produced a correct version of the explanation:
“The expression “cold enough to freeze the balls off a brass monkey” is a colloquial phrase used to describe extremely cold weather.
There are a few different theories about the origin of this expression, but the most widely accepted one is that it comes from the era of sailing ships.
In the days of sail, cannons were an important part of a ship’s armament. The cannonballs were made of iron and were stored on deck in piles or pyramids. To prevent them from rolling around in rough seas, the cannonballs were often stacked on a flat brass plate called a monkey.
When the temperature dropped below freezing, the brass monkey would contract more than the iron cannonballs, causing them to fall off. Hence, it was said to be “cold enough to freeze the balls off a brass monkey.”
However, it’s worth noting that there is no definitive proof for this theory, and other explanations have been proposed as well. Nonetheless, the phrase has become a part of English language and culture, and it is still used today to describe extremely cold weather.”

The second iteration is both factually more correct and more concise, but for a first encounter with ChatGPT to throw up such a blatant fabrication is worrying. It suggests that however miraculous the task of researching and collating this story might seem, the result definitely needs checking by a human being, because otherwise another false news item might find its way onto the Internet and go echoing around and around… Just such a story of defamation by a ChatGPT “news” story happened to an Australian mayor who, rather than being portrayed as the whistleblower he actually was, was reported to have gone to jail for corruption – he is suing ChatGPT’s developers, OpenAI.

Not that I am fishing for compliments, but what do you think of my explanation of the phrase compared to ChatGPT’s? After all, you might think I was too verbose and ChatGPT more concise…
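For completeness: everything above was done through the free web chat interface, but the same question can be put to the model programmatically through OpenAI’s API. Below is a minimal sketch using the openai Python package; the model name and settings are my own assumptions, and the package’s interface has changed between versions, so check the current documentation rather than treating this as gospel.

```python
# A minimal sketch of asking the same question through OpenAI's API rather
# than the web chat interface. The model name is my own assumption; the
# openai package's interface has changed between versions.

from openai import OpenAI

client = OpenAI()  # expects an OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user",
         "content": "Explain the most commonly cited origin of the phrase "
                    "'cold enough to freeze the balls off a brass monkey'."},
    ],
)

print(response.choices[0].message.content)
```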

Image AI

So far, I have only tried out the Midjourney bot (short for robot, which I discussed in my “R” post in April), but I will be trying and comparing other image generators and reporting back to you, Dear Reader. For now, I just want to share some images that I have created which were successful, although sometimes only after several attempts to refine the prompts – more of that next time…

Each submission or message to the Midjourney Bot produces not one but four variations for you to choose from. You may then ask for an enlargement of one (see below) or take one as a starting point for variation, and another four will be developed. The prompt for this group was “flower_fairies_playing_amongst_blackberry_stems_and_fruit”.
Here is the enlargement of the top left image.

I then decided to turn this into a tiled image (below).

The prompt for this image was “Village_beside_lake_style_of_Charles_Rennie_Mackintosh_pallette knife_oil_painting”. I think the syntax should have been “in the style of”, so I’m not sure that this owes much to Mackintosh, but it is great – I think…

So we (if you want to come along with me) are on our way to investigating AI – so far, Text AI – dodgy, and Image AI – amazing!

Landscape or Portrait?

Yesterday saw the launch of the iPad and finally, for computers, the question of Landscape or Portrait becomes irrelevant. In a way, it has been quite arbitrary that our screens have always been landscape when so much of what we have done on them is to create documents that will be printed in portrait orientation. What about photographs, I hear you say, but when computers started out, their makers could hardly have dreamed of the current blossoming of digital images poured onto a world wide web by non-technical bods. Perhaps they oriented their “monitors” landscape-wise because they resembled TV screens, and no, they probably weren’t thinking that one day you would be able to watch TV on your computer! Of course, back in those days, you couldn’t see a representation of your document on screen – just text, and that in a primitive screen font – no fonts, styles, WYSIWYG or GUI. You had to type till your screen was full, press the PrtScn key to send all that text to the printer, then scroll down to a fresh screen and carry on. Computers are like archaeological sites, with bits of the past sticking up here and there – the PrtScn key is still there and very useful it still is. It has a slightly different usefulness, but it’s really still doing the same thing – it sends whatever is on the screen to the Clipboard, from whence you can paste it into a graphics programme and have your wicked way with it.
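As it happens, you can now do the PrtScn trick without touching the keyboard at all. Here is a minimal sketch using Pillow’s ImageGrab module, which works on Windows and macOS; the output filename is my own choice.

```python
# A minimal sketch of a programmatic PrtScn: grab the whole screen and save
# it to a file instead of pasting it from the clipboard. Requires Pillow;
# ImageGrab works on Windows and macOS. The filename is my own choice.

from PIL import ImageGrab

screenshot = ImageGrab.grab()               # capture the current screen contents
screenshot.save("screen_grab.png")
print(f"Saved a {screenshot.width} x {screenshot.height} pixel screenshot")
```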
Going back to photos, you might argue that most of them are landscape, but when they are not, they end up being displayed so much smaller in portrait than their broad-bottomed sisters and brothers sitting in landscape – either that or YOU get a crick in the neck looking at them. Of course, with the advent of laptops you could stand them on their side – the keyboard side forming a handy prop – but that is still not very handy for editing.
When the web page came along, things got more complicated still – how big is a web page when you come to print it? After all, some pages just go on and on, or should that be down and down! Some time ago, on the now defunct Mo’time blog platform, I attempted to create pages that looked the same both on screen and when printed out. You can still see them and judge their success at
http://www.artscastle.com/Motime_Miscellany/ .
Now, I dare say there were already some snazzy (and expensive) monitors that could be swivelled round to show things in portrait, but it was mobile phones that really gave us the two-way screen – those little computers in our pockets, those little gems of consumer lust. Re-orient the picture on the screen and the Portrait/Landscape dilemma is a thing of the past, and of course, if you own an iPhone with its quintessential Apple design ethos, the edge-to-edge picture gives no clue as to whether the device even has an orientation. So now, in a convergence of computer, picture, phone and almost anything else you can think of and write an “App” for, carried forward to the iPad, we can at last choose to view photos and documents in either Portrait or Landscape – and you will be able to use the qwerty keyboard at the bottom of the portrait touch-screen to edit a document in Portrait!
How do you like them Apples?