Exploring and Evaluating Generative AI Number Five – Kickback and Compromise…

It is 18 months since my last essay exploring AI back in 2023, and there have been many developments since then. Certainly more and more people, from poets of my acquaintance to businesses, have explored and made use of AI in one form or another, and the industry is full of startups offering AI solutions in all sorts of areas – for example, creating training videos in which you prime the AI with the content you want delivered and it concocts slides with a voiceover, or even a fully imagined video trainer, to articulate your training needs.

Public awareness has continued to grow; the alarm over AI taking away jobs, or taking over the world and eradicating humans, is perhaps less hysterical and the debate more focused. One way in which this is happening is that writers and artists have challenged the AI companies over the currently unregulated and voracious use of their (the creatives’) material in training the AIs’ LLMs, or Large Language Models. These models are trained on vast bodies of existing work, written and visual, from which they both learn and plagiarise when prompted to generate an image or a piece of writing “in the style of”. When I first started exploring Generative AI, these ethical battlelines were not so apparent. Now, though, we must seriously consider the ethical questions raised by how we choose to use AI – especially when we reference existing artwork or literature. You may feel that the damage is already done, that the genie is out of the bottle and there is no point in shutting the stable door after the horse has bolted, but the laws need changing to protect the copyright interests advanced by creatives. Perhaps a statute of limitations will emerge so that older work, out of copyright, is usable, whilst currently copyrighted material is either out of bounds or attracts a fee for its use. Below is an illustration I tried, purely as an experiment, asking for it to be “in the style of” Studio Ghibli – a Japanese animation studio based in Koganei, Tokyo. The exact prompt was “Alice in Wonderland at the Court of the Red Queen in the style of studio ghibli directed by Hayao Miyazaki”.

As you can see, Alice is there, in a dining room sumptuous enough to be that of the Red Queen, but there are no other characters present, and the AI (Midjourney) has become fixated on the Fly Agaric toadstools on which Alice found the hookah-smoking Caterpillar sitting. Does this image owe much to Studio Ghibli and director Hayao Miyazaki? It could certainly be an animation style, but I don’t know his work well enough to say – I just saw the style in a list of things you could prompt AI with and decided to experiment. The fact that the AI recognised the name of Studio Ghibli, however, means that it was trained, at some point, on the studio’s work…

So I have decided, for three reasons, to be more circumspect about my use of “in the style of”. Firstly, out of fairness to current creatives; secondly, because it remains as difficult as ever to get AI to produce an image exactly as you have it in your mind’s eye; and lastly, because the brilliance of AI images not only illustrates a poem but threatens to eclipse or distract from it, so I have made less use of them to accompany poems. However, whilst I am a competent artist in some respects, I am not an illustrator with a wide range of drawing skills, and so below I am going to show you how AI-generated image elements can be blended and combined in Photoshop to arrive at an image I had in mind for a commissioned illustration. My friend Melissa Lemay is launching an online journal called Collaborature to showcase collaborative poems and works of literature, as well as interviews with authors. She sent me her mission statement and gave me carte blanche to produce an illustration for the launch…

My idea is to have a woman absorbed in reading a book, with a “thought bubble” rising up into a night sky showing the moon, and a rocket on its way to the moon – all inspired by her reading…

To begin with, I decided on a black-and-white illustration combining “drawn” elements, which made it slightly easier to achieve consistency. Firstly, I wanted a young woman drawn “in the style of” the E. H. Shepard illustrations from “Now We Are Six” by A. A. Milne – or rather, I wanted her in the pose below, which is not quite the same thing…

The results below, despite what I thought to be a very detailed prompt describing the young woman, her clothes and her pose, were not right…

Black and white line drawing of a young woman wearing a sleeveless dress with hemline just above the knee lying on her stomach elbows stretched out and head supported by hands reading a book propped up in front of her legs bent at the knee and bent upwards in the style of E.H.Shepard

Certainly I think the style has little to do with E. H. Shepard, and in the lower left picture the young woman, far from being excited with her imagination fired by reading, has fallen asleep! I tried making variations but nothing worked any better, so I then decided to try for a picture of a young girl instead and got the result below.

Black and white line drawing of a young girl wearing a sleeveless mini-dress with hemline just above the knee lying flat on her stomach head supported by hands reading a book propped up in front of her legs bent back and over her knees in the style of E.H.Shepard

Once again we have a sleeping beauty, but I decided I could accept the top left image. Next, I wanted a thought bubble rising from the girl, featuring a rocket to the moon conjured in her imagination by her reading. I was remembering the Moon’s face in the 1902 film A Trip to the Moon (French: Le voyage dans la lune) by pioneer film director Georges Méliès.

And so I used the prompt “Black and white line drawing of moon against a black background in the style of Georges Melies” to obtain this:-

Nothing like Georges Méliès’ image, so I had no qualms about using this quite straightforward “drawing-style” moon. Next, on to a rocket – and I have always loved the rocket (was it inspired by the German V2 rockets?) from The Adventures of Tintin, the comics series by Belgian cartoonist Hergé.

And so using this prompt “Black and white rocket against a dark grey background in the style of Tintin Explorers on the Moon” I obtained these images from Midjourney.

Once again, I don’t think the similarity of the rockets warrants the term plagiarism so much as “inspired by…” and I was happy to go with the bottom right image.

Now that I had all the elements, it was time to start assembling them in Adobe Photoshop. This process is essentially like building a collage, except that you can go back and work on each layer ad infinitum – resizing, adjusting the tones, cropping, and even rearranging the order of the layers, and so what hides what…

Having decided on the size and proportion of the Background, I placed the girl before adding a gradient layer for the night sky. Then, having pasted multiple copies of the Mission Statement text, I placed it as a layer of white text so that it fades out at the bottom of the picture but is readable against the black of the sky. I then brought the girl to the front again, as the white letters were running over her.

The white letters looked too strong, so I increased their transparency to tone them down. Next, I added the thought bubbles, giving them white edges so they stand out, and then placed the moon into the large bubble.

I could have added the rocket within the thought bubble, like the moon, but I thought it added to the portrayal of the act of imagination if it came from outside the thought bubble – as if it had come from a different bubble perhaps… Lastly, the rocket looked too static and so I added some “motion streaks” to complete the picture.
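For the more code-minded, the same layered-collage idea can be sketched in a few lines of Python with the Pillow library. This is only an illustrative outline of the kind of compositing Photoshop does interactively, not my actual working file, and the element file names are placeholders for the AI-generated pieces described above.

```python
# A minimal sketch, not my actual Photoshop file: the layered-collage idea in Python
# using the Pillow library. The element file names are placeholders.
from PIL import Image, ImageDraw, ImageFilter

W, H = 1200, 1600

# Layer 1: night-sky gradient, black at the top fading to white at the bottom
mask = Image.new("L", (1, H))
for y in range(H):
    mask.putpixel((0, y), int(255 * y / H))          # 0 = show black sky, 255 = show white
canvas = Image.composite(Image.new("RGB", (W, H), "white"),
                         Image.new("RGB", (W, H), "black"),
                         mask.resize((W, H))).convert("RGBA")

# Layer 2: semi-transparent mission-statement text, readable against the dark sky
text_layer = Image.new("RGBA", (W, H), (0, 0, 0, 0))
ImageDraw.Draw(text_layer).text((60, 80), "Collaborature mission statement...",
                                fill=(255, 255, 255, 120))   # low alpha = toned-down lettering
canvas = Image.alpha_composite(canvas, text_layer)

# Layer 3: the reading girl, brought in front of the lettering
girl = Image.open("girl_reading.png").convert("RGBA")
canvas.paste(girl, (300, H - girl.height - 50), girl)

# Layers 4 and 5: moon inside the thought bubble, rocket arriving from outside it,
# with a simple blur standing in for the hand-drawn motion streaks
moon = Image.open("moon.png").convert("RGBA")
canvas.paste(moon, (700, 200), moon)
rocket = Image.open("rocket.png").convert("RGBA").filter(ImageFilter.GaussianBlur(1))
canvas.paste(rocket, (500, 420), rocket)

canvas.convert("RGB").save("collaborature_launch.jpg")
```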

So there you have it – each element of the picture was produced by Midjourney generative AI, yet I could never have got an AI to see and conform to the design I had in mind, so I had to assemble the elements the way I wanted after the event. This is one way in which I think AI can aid the graphic designer – after all, collage of existing print material is a very old tradition…

Poetry Postcard Fest Follow Up Post 2024 #6

Dear Roberta
Using an AI to make images
can be wondrous,
frustrating, puzzling
and incredible in varying degrees
but sometimes, the sheer beauty
simply makes one gasp…

This last year, both I and many poets I know (in the Internet sense as opposed to the real-world or the biblical one) have started illustrating our work using AI images, whose results are sometimes so stunning as to distract from the poems being illustrated. Yet we poets often work to prompts, and now we must construct prompts for the AI…

The Poetry Postcard Fest is a challenge which encourages poets to write an unedited poem on a postcard and send it to a stranger. It is organised by the Cascadia Poetics Lab, who sort the participants into lists of 31 + yourself for you to address your offerings to. This was my second year and I was on List 10. The lists are sent out in early July and you have until the end of August to send out your missives – to date I have received 16 of a possible 31, and now that we are into September it is allowable to share the cards and poems you sent.
Although the original poems are to be sent as written – crossings out, blots and all – I have typed them out for people who can’t read my writing, and I am allowing myself to edit if I feel like it…

A-Z Challenge 2024 Reflections Post

For a list of all my A-Z posts click the A-Z 2024 Menu Button at top or go back in time with 2020 – 2023…

I already began the process of reflection on my 2024 A-Z Challenge in my X is for eXigesis post – that being my way around the lack of an X commodity – so I will cover slightly different ground here. To my mind, there would be no point in setting out on the journey which is A-Z if I were to pick a theme about which I was so well versed that there was nothing to be learned by me too! If that were the case, then I doubt whether the writing would be as compelling as I hope it has been for each of the years I have participated – five now… I like to learn new things and I feel that my excitement informs the writing. For example, I learned at school about the theft of rubber plants that broke the Brazilian monopoly on rubber, and I had read a book called “Genocide” about the Belgian atrocities in the Congo, but somehow it all came together in a more insightful way in my “R” post – and those insights into the history of empires, geopolitics, and the drivers of economics were repeated time after time with the story of different commodities.

Talking of “well versed” – I made it a double challenge this year by writing a poem for each post, in an alphabetically matching form, following a plunge into poetry after the 2023 A-Z which saw a lot of poets visiting my site, and I have certainly had my poetic mind stretched by the exercise. I feel I can understand what it might be like to be a Poet Laureate, having to deliver poems on demand on unlikely or difficult sources of inspiration – Royal Weddings, Openings and Deaths… Many poets online write to all sorts of prompts every day (and I know some who write to several each day), but if the muse don’t move you, you don’t have to do it – this poetic challenge, on the other hand, was determined by the intersection of the commodity and the poetry form of the day, and so included a Haiku about Heating Oil, an Ideogramme about Investment and a Pylon Poem about Palladium!

Each year since my 2020 “pantsing it” debut in the A-Z (I only discovered it on April 1st), I have vowed to have all my posts pre-prepared. This year I was halfway there at the start of the month and had the last post finished two days ahead of time, with only finishing touches like links to other posts to be done the night before. The one thing I simply forgot to do this year was to use the daily letters at the top of the page, which is a shame because Ronel did an excellent job of creating them – and indeed all the graphics this year. I will do better next year, and I see that the graphic for the 2025 Theme Reveal is already designed! So yes, I will be back – all being well – and I already have a theme planned for 2025…

I enjoyed the visits of the team, who were supportive as always, and of course it was a joy to visit and be visited by others on the Challenge – I will be reviewing them, and others I intend to visit now that the madness of April is over, in Roadtrip posts. One thing that surprises me each year is how few people comment and take the chance to post the link to their daily post on the official April Blogging from A to Z Challenge #AtoZChallenge daily post – after all, is this not the perfect gathering place at which to check in and start your daily promotion? I start each morning by promoting my post both at the official site and on my Facebook page (which then shows up on Instagram). I’m not sure they bring in many visits, but then the best and most enjoyable method of self-promotion is via visiting and commenting on the blogs of fellow travellers…

Favourite comment: “I think you’ve got the weirdest theme out there for this A to Z. Congrats. This is brilliant. And I’m rolling my eyes at the poem.” by Liz A of lawsofgravity.blogspot.com on Soybeans and a Solage.

Lastly, I like to illustrate my posts copiously, either with photos gleaned from the web or generated by my prompts in Midjourney – so here is a small selection of my favourite Midjourney pictures from this year.

And here it is – the reward for all that effort…

Exploring and Evaluating Generative AI Number Five – Barglefloop, Confusing the AI

My AI muse Misky recently made a post entitled Barglefloop, and I quote her: “barglefloop, means to mess with words in your AI prompt in order to confuse it, to turn nouns into verbs, to make single words compound, etc.” It occurred to me to experiment by giving Midjourney the same prompt Misky had used and seeing whether the AI came back with the same images as she got – it did not! Here are the first four I got using the prompt “Barglefloop”.

Where Misky’s images tended towards Hieronymus Bosch, mine were Harry Potter meets Lord of the Rings Rivertown. Below is the fourth image enlarged to show the level of detail the AI has put in…

But what does this tell us about the way Midjourney works – that Barglefloop is nonsense and so the AI creates whatever it wants to, let off the leash so to speak? I decided to add some more nonsense: “barglefloop female foxing blithy toves” – “foxing” being a noun turned verb, and an echo of “slithy toves” from Lewis Carroll’s Jabberwocky (’twas brillig and the slithy toves…). This time the AI seized on the only bit of the prompt that made sense – fox – and gave me four fox pictures, ignoring the rest of the senseless prompt. So Midjourney, whilst known to hallucinate as much as the next AI, was not so desperate to act freely when released by nonsense…

Cute, but no cigar for Midjourney, so now I went for all the nonsense and none of the foxing around – “barglefloop blithy toves” – and now we get something quite nightmarish in a Snarky/Jabberwocky, Carrollian sort of way, with perhaps a hint of Bosch…

So lastly, I decided to miss out the Barglefloop and just retain the Lewis Carroll words “twas brillig and the blithy toves”, and now we can see an Alice in Wonderland flavour to the images – particularly the top two…

Finally, I asked for variations on the bottom right image – a rather mad-looking figure with a slightly Victorian flavour…

Does this tell us much about the way the “mind” of an AI works? I will let you be the judge – and my advice: stay away from hookah-smoking caterpillars seated on toadstools…

Exploring and Evaluating Generative AI Number Four – Working with AI and the Women’s Issue…

During the month of April this year, whilst participating in the A to Z Challenge, I was privileged to encounter the work of Misky, whose blog It’s Still Life showcases two distinct things: poetry written by Misky, illustrated with generative artwork she creates using the Midjourney AI app. So amazing were these images to someone who is, in part, a visual artist, that they inspired me to make an exploration of Generative AI for myself. At the same time, AI has been hitting the headlines big time, mainly for its use in text generation and the impact it might have on jobs, and since writing is another thing I do in my day job, I was also intrigued to see whether AI might be of any use in a company such as the one I work for. (I am the gradually retiring General Manager of a food manufacturing company.) It has been a fascinating voyage of discovery, and to cap it all, lying awake at 4 o’clock this morning, I found myself listening to “The Conversation” on the BBC World Service, and what should the topic be but AI, with special reference to the involvement of women. So, mind on fire, I am going to draw this series together, although I freely acknowledge I have but dipped my toe in the waters of AI and may return to the topic in the future…

To recap the three articles I have already written: in the first one, I tried out ChatGPT to see how it would research and write about one of the topics from my A to Z, and immediately encountered the phenomenon of AI hallucination – the ability, in fact tendency, of AI to make things up. I also “showcased” my first attempts at visual collaboration with the Midjourney bot.
In the second report, I compared ChatGPT to Writesonic which produces more lengthy articles – testing them against a typical (for me) work assignment.
In the third report, I looked at the most controversial assertion about AI – that AI might, in the future, eliminate human beings, Terminator-style – and referenced articles that thoroughly refute the need to worry about that particular outcome – go reassure yourselves! However, there are many things about our present and future use of AI that do bear looking at, and these were raised in the episode of “The Conversation” that woke me up this morning. The programme, presented by a woman, featured two women working in the field of AI, one a philosopher and one an expert in data analysis, and as well as the general concerns that need addressing about AI, they highlighted the lack of representation of women in the field – only one CEO, qualifying women failing to get jobs in the industry, and so on. They did, however, point out that one of the changes to AI itself in recent times has been its accessibility: no longer do you need a degree in computer programming – you could make your first interaction with ChatGPT in the same time it would take you to query something on Google. Which brings me back to Misky…

Misky was not only the inspiration for my (deepish?) dive into AI, but was also extremely helpful and encouraging to me at the outset – itself a reflection of how women tend to be collaborative, good team players, which the contributors to “The Conversation” suggested is a good reason for women to be more involved in AI companies, in reviewing the implications, and in forming the regulation which is undoubtedly necessary around AI. A few days ago, I was delighted to meet Misky face-to-face on a Zoom call after many text interactions online, and one of the things she shared in our too-brief call was that she had had some push-back from certain readers of her blog about the use of AI images. I would like to talk to her more about these issues, but the participants in “The Conversation” also raised the question of how artists, whose work has been studied by AI to create new images “in the style of”, are being short-changed. You may have been wondering about the image at the top of this post – I created it in Midjourney by prompting it to “imagine” Knaresborough railway viaduct “in the style of Hokusai”, a master of Japanese woodblock prints. I have used this subject as my test piece for exploring what Midjourney can do, as you will see in the previous post. Now, Hokusai is long dead and so compensation is hardly an issue, but another group of more recent artists might object. I am working on a spoof post – “How to Make a Body”, a tale of human reproduction in the style of an Internet recipe – and although, like Misky, the writing is all my own, I wanted an illustration to fit the tone of the piece, and so prompted Midjourney to “imagine” a woman in a hospital bed, holding her newborn baby, with her husband leaning in, “in the style of a Ladybird book cover”. For those of you who may not be familiar with Ladybird books, they were written for children from the 1940s until the 1980s and feature a distinct style of illustration.

In recent years, a series of spoof books in the Ladybird style, aimed at those who had grown up with the original series, have been very successful – for example…

I had no idea whether Midjourney would be able to fulfil my prompt; there are lists of artists’ styles you can use with Midjourney, but I hadn’t seen this one – I was not disappointed!

I am keeping my powder dry as to the final image I chose, but this first set of four (Midjourney shows off by producing not one but four attempts in under sixty seconds), done to the prompt “A new mother in a hospital bed with her husband leaning in as she holds their new baby in the style of a Ladybird Book Cover”, has misunderstood my intention – the mother is holding a magazine rather than a baby – though the graphic style is very Ladybird-book-like. I acknowledge that I am still only a beginner in my use of prompts with all the forms of AI I have tried so far, and there is undoubtedly an “art” to getting it right, which is why I said “I created it in Midjourney”. Although I am competent at watercolour, screen-printing and other forms of illustrative art, I could not produce images such as the above, and certainly not in sixty seconds. So, how much of this creation is my prompt, how much is the brilliant programming behind Midjourney, and how much is owed to the various artists who produced the illustrations of the Ladybird books? I cannot begin to answer that question, but it does raise an issue which needs considering when formulating regulation around the use of AI. Meanwhile, like Misky and me, jump in, have a go, and get a feel for yourself for the god-like feeling of creating with an AI tool…

Much of the debate around the consequences of the rise of AI concerns its impact on jobs and the potential losses and gains. As I described in my first report, the development of computer spreadsheets swept away the lowly positions in accountancy but opened up many more jobs at the high end of the profession. Although this might be the hope for AI – that it liberates us from the menial and allows us to create new roles, roles which might be beyond the capability of AI to imagine – at present it is not just menial tasks that are threatened by bots like ChatGPT, but some roles higher up in various industries. Having said that, given the tendency of AIs to hallucinate, I wouldn’t trust an AI’s writing without an experienced human checking the output before sending it out! Also, when you are a creative individual yourself, trying to get an AI to produce exactly what you have in mind is tricky. In my 2021 A to Z Challenge, I was trying to complete a science-fiction novel, and the exercise gave me enough momentum to finish it a few months later. Then I set about creating a book cover for it, to feature the final denouement – a tense scene set in a space elevator on the edge of space. I prepared the background view by Photoshopping some NASA photographs looking along the length of the Red Sea towards Palestine, painted in a great river estuary as per my planet, then superimposed some 3D elements which I drew up in AutoCAD, and finally added the title and my name. You can see this below. However, I felt that the result was not quite up to the standard of artwork commissioned by big sci-fi publishers and imagined that, in the unlikely event of the novel being published, an improved version of the cover would be substituted for my “sketch”.

© Andrew Wilson 2022

Back to today, and naturally, I thought it would be a good test of Midjourney to see whether it could be used to produce a better version of my cover. Well, the first attempts were brilliant style-wise, but nothing like the image I wanted and many attempts followed to no avail…

My prompt read “space lift arriving at 300 miles above Earth like planet over Sahara like region array of cargo containers spread out in one layer small spaceship approaching”. Midjourney couldn’t understand “space lift” and I had to change lift to elevator; it couldn’t understand “array of cargo containers” either, but it did have all the sci-fi style I wanted. So then I decided to create a space-view background without the lift and substitute it into my own cover illustration. Bingo!

© Andrew Wilson 2023

Still, I hanker after the crisply detailed images of the elevator that Midjourney is capable of, if only I could prompt it correctly – so, a work in progress… What this exercise does show is that it is possible to use AI for the things it does better, in combination with human talent.

In Conclusion…

This exploration of AI has felt like a marathon, and it is just one person’s experience – I am really only at the beginning. I’m sure I will find both text- and image-generative bots to be of use in my future work and play. I urge you all to experiment for yourselves, form your own judgements (and please share your results by linking in the comments), join the debate over the regulation of AI, and explore other artists – in particular Misky, who began this journey…

Exploring and Evaluating Generative Artificial Intelligence Number Three

I decided to make a header image (above) for this little series of posts and have retrofitted it to the two previous posts here and here. So I asked the Midjourney app on Discord to depict a silver-skinned android, firstly standing at an easel painting, and then at a computer typing. I am fairly sure that the AI known as Midjourney had no sense of the irony of asking it to anthropomorphise an android doing these activities, because current forms of AI are so far from having the sentience required to appreciate concepts as subtle as irony. Spoiler alert: I approached this evaluative exploration with certain preconceptions about the likely conclusion, although I didn’t know for sure how those conclusions might be reached, because I didn’t know in detail how AIs work. What I am going to show you today is what I have learned, but I am also going to link you to a very erudite analysis of why we should not be worried about AI taking over the world – in a piece called “Why the AI Doomers Are Wrong”, Nafeez Ahmed explains why the direction of travel of AI development simply can’t lead to a human-brain-like sentience. I will quote from his article later.

First of all, look at the left-hand side of the header picture, in particular the easel. On close inspection, you can see that the easel is the wrong way round and that the painter/android is standing behind it. Midjourney produces four images by default in the remarkable time of about 60 seconds, which is almost like magic – indeed, in 1962 the science fiction writer Arthur C. Clarke stated in his book “Profiles of the Future: An Inquiry into the Limits of the Possible” that “Any sufficiently advanced technology is indistinguishable from magic.” So despite the apparent magic of creating these images so quickly, the AI has made a fundamental mistake that reveals it doesn’t really understand what an easel is or how it should be used. Nafeez Ahmed is mostly talking about text-generative interactions with AI – ChatGPT and the like – but what he says below is equally applicable to images generated by AI…

The stunning capabilities of the new chatbots and their uncanny ability to engage in human-like conversations has sparked speculation about sentience. Yet there is zero tangible evidence of this beyond a sense of amazement.
What many miss is that the amazing nature of these conversations is not a function of an internal subjective ‘understanding’ by an identifiable entity, but rather the operation of algorithms made-up of hundreds of billions of parameters trained on massive datasets of human text to predict words that should follow given strings of text. {…} This is not a breakthrough in intelligence, although it is, certainly, a breakthrough in being able to synthesise human responses to similar questions and thereby mimic patterns in human interaction. This model of AI, therefore, cannot, in itself, generate fundamentally new knowledge or understanding – let alone sentience.

Nafeez Ahmed

Nafeez goes into great detail about how the research is headed in the wrong direction and, indeed, how unlikely it is that it will ever succeed in equating to human sentience, so if you want to put your mind at rest about a Terminator-style future in which humans are subjugated by machines – nip on over and read the full article. Meanwhile, I am going to show you some more examples of how Midjourney gets things “wrong”, how to get the “right” results, and what that says about how useful such a programme can be.

You interact with the Midjourney app by sending it a message (just as if it were really an intelligent entity) containing a prompt, and once you receive your four images, you can choose one to enlarge, if you are satisfied with it, or run variations on one or all of them. Here is the prompt that produced the above set of images: “Silver android painting at an easel by Roy Lichtenstein”. The AI places most importance on the object at the beginning of the prompt, then on the activity described, and lastly it attempts to produce the images, in this case, in the style of the Pop Artist Roy Lichtenstein – famous for paintings in the style of close-ups of comic book pictures. These close-ups show the dot screens that were used to shade the illustrations of the comic book, plus the hard black outlining, and Midjourney has picked up well on these style features, particularly in the top right and bottom left pictures. The top left shows a sculpture vaguely resembling an up-cycled easel made of silver, and the bottom right shows a silver-skinned figure with a dot-screen effect, holding a brush and painting a picture but with no easel. In the top-right picture, the top of the easel is just showing in the bottom corner and the android “artist” is holding a small blank canvas in her hand and drawing on it. Having seen the header image at the top – and these pictures were as near as I could get to what I wanted, from multiple attempts – you can see that what I wanted was an all-over silver-skinned android, whereas in the images above, the top-right has a human face although “her” body is robotic (perhaps cyborg is a better description), whilst the other pictures show a sculpture, a woman and a totally abstract figure. So I decided to change the prompts to “robot” rather than “android”, which produced better results. The reason I had started with “android” was because robots range from automatic hoovers that move around your rooms like little miniature flying saucers sucking up dirt, to more anthropomorphic devices – which is what I wanted.

“Standing silver robot painting at an easel by Roy Lichtenstein” produced (among others) the above image, in which the robot, possibly standing, is grasping what looks like the top of an easel, but the “painting” does not appear to be on the easel. So I tried “Robot standing painting at an easel” and got this rather cute robot who looks like he is sitting on an invisible chair – “Hey Midjourney, just because you don’t show the chair doesn’t make it standing!” Notice that with the style reference to Roy Lichtenstein gone, this image is very different. I would like to show you more of the iterations, but Midjourney froze and when I reinstalled it, it had lost the entire session of work – you just can’t get the staff…

Another thing that I have discovered in my experiments is that both Midjourney and ChatGPT like to add unspecified embellishments – remember in my first report how ChatGPT found the correct explanation for the phrase “Cold enough to freeze the balls off a brass monkey” but also added a totally made-up explanation? Well, Midjourney does the same thing. Here is a picture of the railway viaduct at Knaresborough in North Yorkshire, an hour’s drive from where I live.

I wanted to see if Midjourney could produce a collage image using fragments of maps. It tried, but didn’t really understand the concept – although I am not saying that it can’t; at the very least, my prompt wasn’t sufficient (one of the oldest sayings amongst computer programmers is “Garbage in – garbage out!”). Here is Midjourney’s best effort…

There are some map elements and the whole scene has been chopped up and rearranged but not in a way that makes sense – this one is closer to the real view…

But my first attempt, before I added the collage style, was simply to see how Midjourney would find and represent the viaduct, and it generated the four images below. In the top left image, Midjourney has added another railway running beneath the viaduct; likewise, in the lower left it has added a canal; and in the images on the right, Midjourney has transported us into a past sans Knaresborough and a post-apocalyptic future where vegetation is growing over the viaduct.

Enough with all the pretty pictures – what does all this reveal about the way the AI Midjourney works? Referring to the work of Erik J. Larson in his book The Myth of Artificial Intelligence, Nafeez Ahmed cites a summary by Ben Chugg, lead research analyst at Stanford University (I know – quotes within quotes), as follows:-

“Larson points out that current machine learning models are built on the principle of induction: inferring patterns from specific observations or, more generally, acquiring knowledge from experience. This partially explains the current focus on ‘big-data’ — the more observations, the better the model. We feed an algorithm thousands of labelled pictures of cats, or have it play millions of games of chess, and it correlates which relationships among the input result in the best prediction accuracy. Some models are faster than others, or more sophisticated in their pattern recognition, but at bottom they’re all doing the same thing: statistical generalization from observations.
This inductive approach is useful for building tools for specific tasks on well-defined inputs; analyzing satellite imagery, recommending movies, and detecting cancerous cells, for example. But induction is incapable of the general-purpose knowledge creation exemplified by the human mind.”

https://towardsdatascience.com/the-false-philosophy-plaguing-ai-bdcfd4872c45

Nafeez goes on:-

Current AI has become proficient at both deductive and inductive inference, with the latter becoming a primary focus.
Larson points out that human intelligence is based on a far more creative approach to generating knowledge called ‘abduction’. Abductive inference allows us to creatively select and test hypotheses, quickly eliminate the ones which are proven wrong, and create new ones as we go along before reaching a reliable conclusion. “We guess, out of a background of effectively infinite possibilities, which hypotheses seem likely or plausible,” writes Larson in The Myth of Artificial Intelligence. {…}
And here is Larson’s killer diagnosis: We don’t have a good theory of how abductive inference works in the human mind, and we have no idea how to recreate abductive inference for AI: “We are unlikely to get innovation if we choose to ignore a core mystery rather than face it up,” he writes with reference to the mystery of human intelligence.
Before we can generate genuine artificial intelligence that approaches human capabilities, we need a philosophical and scientific revolution that explains abductive inference. “As long as we keep relying on induction, AI programs will be forever prediction machines hopelessly limited by what data they are fed”, explains Chugg

https://www.bylinesupplement.com/p/why-the-ai-doomers-are-wrong?utm_source=substack&utm_medium=email

To relate this back to my experiments with Midjourney: the AI could identify what an easel looked like and include it in an image, but it didn’t really know what it was or how it was used. Easels are probably present in thousands of pictures of artists’ studios, as well as adverts, but I bet there isn’t a “Painting 101” that says “First you will need an easel, and this is how you use it”, because when a human artist goes into a studio and sees canvasses fixed into easels, even if he has never seen one before, he is there to paint canvasses and it is obvious what they are and what they are for. It might be obvious to a human being, with our ability to use inference, deduction and abduction, but an AI may identify an easel and yet, without finding a clear description of its usage, never fully fathom how it is used…

As for the tendency to add extraneous details: the algorithms that govern generative AIs are designed to mimic human conversational style, so when one has found a relevant answer to the requested information or task, it looks to extend the conversation with what it has learned might plausibly follow – it doesn’t know whether it is true or not, because that is way above its pay grade (a metaphor which it probably wouldn’t understand either). This phenomenon of AIs making things up is called hallucination – a very anthropocentric term…
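I am no AI engineer, but this “continue with whatever plausibly follows” behaviour can be illustrated with a toy sketch – a word-level Markov chain, vastly simpler than the real architecture of ChatGPT or Midjourney, and offered only to show how a purely statistical continuation carries no notion of truth. The tiny training text and function names are my own inventions for the example.

```python
# A toy word-level "next word" predictor: a hugely simplified stand-in for the
# statistical prediction described above, NOT how ChatGPT actually works.
import random
from collections import defaultdict

training_text = (
    "the brass monkey contracts in the cold and the cannon balls roll off the monkey "
    "the cold wind blows across the deck and the balls roll across the deck"
)

# Count which word follows which in the training text
follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def continue_text(prompt_word, length=8):
    """Extend a prompt by repeatedly picking a statistically plausible next word."""
    out = [prompt_word]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))  # plausible continuation, not necessarily true
    return " ".join(out)

print(continue_text("the"))
```

Run it a few times and it will happily produce fluent-sounding fragments that were never in the training text and may be nonsense – which, scaled up by many orders of magnitude, is roughly the flavour of hallucination.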

I will make one more report on my attempts to get exactly what I wanted from Midjourney and how I found a compromise to be able to work with the results…

Exploring and Evaluating Generative Artificial Intelligence, Number Two…

ChatGPT is not the only flavour…

ChatGPT is the AI offering from OpenAI and is probably the most talked-about bot since its launch last year – however, it is not the only one, so today I am going to compare it with Writesonic and see how they each handle a piece of promotional writing. I still work 2.5 days a week as a manager at a gelato and pudding factory – sounds great, unless you are diabetic like me. Anyway, we are trialling some new gelato flavours and I thought it a good moment to see how a generative AI writer might handle promoting one.

ChatGPT

Having no idea what might emerge, I started by putting in a very simple prompt, “Promote a new gelato flavour – Caramel Popcorn Gelato”, and this is what emerged a minute or so later.

“Introducing the newest addition to the world of gelato: Caramel Popcorn Gelato! This delectable treat is the perfect blend of sweet and savoury, with creamy caramel gelato swirled with crunchy popcorn pieces.
One spoonful of this delicious gelato and you’ll be transported to your favourite movie theatre, with the aroma of freshly popped popcorn and the indulgent taste of caramel. The creamy texture of the gelato perfectly complements the crispy texture of the popcorn, making for a truly delightful dessert experience.
Whether you’re looking for a sweet treat to enjoy on a warm summer day or a cosy dessert for a movie night in, Caramel Popcorn Gelato is sure to satisfy your cravings. So go ahead and indulge in this delicious new flavour – you won’t be disappointed!”

I have changed the spelling of “flavor” to “flavour”, but otherwise this is straight from the horse’s mouth and, for so little guidance, is a pretty remarkable effort in just 128 words. As a marketing piece, it leads straight to the movie/popcorn evocation as well as describing the mouthfeel and flavour.

Writesonic

The process of specifying, and even joining the bot service, was much more complicated – it assumed that I was a business user and gave me 10,000 free words to trial the product. Then it asked for a Subject for the piece and offered to search for Keywords if I did not supply them (I did – the name of the flavour); it asked what sort of Tone the piece should have, of which I picked Excited; and it then generated Ideas for the article, followed by a default three Outlines. I could have had more outlines, but reading through them, three was quite sufficient, and I then had a chance to add or remove the ten or so points the article would consist of – I narrowed it down to five; the rest would be useful further along in the promotion. Finally, I pressed Generate and in short order got my article. I had tried to indicate a length of 130 words to be comparable with ChatGPT, but Writesonic did its own thing – 465 words, which, together with the preliminary stages, reduced my 10,000 words to 9,271. Here it is, complete with photo and in the style of a blog post article:

A Delicious Breakthrough: New Caramel Popcorn Flavour Gelato

This was the image that Writesonic sourced from the internet which it merely credited to Unsplash – I have reverse-searched and added the full credit link –
Photo by Toa Heftiba on Unsplash

As a gelato lover, I’m always on the lookout for exciting and unique flavours to try. So, when I heard about the new caramel popcorn flavour gelato, I couldn’t wait to give it a taste. As soon as I took my first spoonful, I knew that this flavour was going to be a game-changer. In this blog post, I’m going to share my thoughts on this delicious breakthrough.

The Inspiration Behind the Creation of the Flavour

What inspired the gelato makers to create a caramel popcorn flavour? The answer is simple: the love of movies. Think about it, what’s a movie without a tub of popcorn? And what’s better than caramel popcorn? That’s right, a caramel popcorn flavour gelato!

The gelato makers wanted to create a flavour that would bring the movie theatre experience to your taste buds. They experimented with different combinations of caramel and popcorn, trying to find the perfect balance. They wanted to create a flavour that was sweet and buttery, with a hint of saltiness. After many attempts, they finally found the perfect recipe.

How to Pair the Caramel Popcorn Flavour Gelato

Now that we know the story behind the creation of the flavour, let’s talk about how to pair it. The caramel popcorn flavour gelato is perfect for those who love sweet and salty combinations. It’s a great flavour to enjoy on its own, but it can also be paired with other flavours to create a delicious dessert.

One of my favourite ways to enjoy the caramel popcorn flavour gelato is to pair it with a chocolate flavour. The richness of the chocolate complements the sweetness of the caramel, while the saltiness of the popcorn adds a nice contrast. Another great pairing is with a fruit flavour, such as raspberry or strawberry. The tartness of the fruit provides a nice balance to the sweetness of the caramel popcorn.

Conclusion and Final Thoughts on the New Flavour

In conclusion, the new caramel popcorn flavour gelato is a delicious breakthrough that’s perfect for movie lovers and gelato enthusiasts alike. The gelato makers behind this flavour deserve a round of applause for their creativity and dedication to bringing new and exciting flavours to the table.

If you haven’t tried the caramel popcorn flavour gelato yet, I highly recommend giving it a taste. It’s a flavour that will take your taste buds on a journey and leave you wanting more. So, go ahead and indulge in this sweet and salty treat – you won’t regret it!

CTA: Head to your nearest gelato shop and try the new caramel popcorn flavour gelato today!

Which was best?

ChatGPT did an amazing job for such a skinny prompt, but I can see that Writesonic would deliver a much more refined article with a lot more options to craft the result, which was fully SEO optimised, for those of you with a nerdy disposition.

A Midjourney image generated by the prompt –
salted caramel popcorn flavour gelato in the style of a Baskin-Robbins ad

To conclude, I am going to show you another gorgeous image generated by the Midjourney AI bot from the prompt “Wild Rose and Lonicera in the style of Tenniel --version 5 --tile”…

Exploring and Evaluating Generative Artificial Intelligence, Number One…

AI, or Artificial Intelligence, is a hot topic at present, with governments weighing up how to invest in the sector, insiders from the industry warning of the potential dangers, and the media stoking fears of everything from job losses to the eradication of the human race. During the recent A to Z Challenge 2023, I was highly impressed by the images selected by Misky in her blog IT’S STILL LIFE – A.I. ART & POETRY and decided to investigate the whole subject of AI for myself. I also wanted to know about generative text AI, because my work demands various forms of word creation and I had heard that this might be one of the first areas to suffer job losses to AI.

This tile design was produced by Midjourney 5 with a prompt of no more than “Passion Flower --tile”. It lines up perfectly on every edge, a task which is difficult to do by hand…
Using third-party software, you can turn Midjourney’s tile into wallpaper and appreciate the seamless matching of edges.
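If you fancy checking the seamless matching for yourself without a third-party app, a few lines of Python with the Pillow library will repeat the tile into a wallpaper sheet – a minimal sketch, and the file names here are just placeholders for the downloaded Midjourney image.

```python
# A minimal sketch: repeat a Midjourney "--tile" image into a wallpaper sheet so the
# edge-matching can be inspected. The file names are placeholders.
from PIL import Image

tile = Image.open("passion_flower_tile.png")
cols, rows = 4, 3
wallpaper = Image.new("RGB", (tile.width * cols, tile.height * rows))
for row in range(rows):
    for col in range(cols):
        wallpaper.paste(tile, (col * tile.width, row * tile.height))
wallpaper.save("passion_flower_wallpaper.png")
```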

But first, a little thought about where I understand us to be right now. There is Soft AI (think Alexa) and then there is the dreaded Hard AI (think Terminator). Soft AI is already in use all around us – Grammarly, predictive text on your smartphone, Alexa, Google and other search engines – but the reason people are not running around screaming “The Terminators are coming!” as a result of all this Soft AI action is that the intelligence quotient of this sort of AI is pretty low. For example, say to your “smart” speaker “Alexa! Tell me a joke!” and Alexa will oblige with a joke of variable humour, but then say “Alexa! Tell me another!” and Alexa will not know what you want – unless you specify “joke”, in which case you could leave the “another” out, since it is irrelevant to Alexa that she has just told you a joke. This means that Alexa is hardly conducting a conversation. On the other hand, she can sort through literally millions of bits of data to find and play you the song you want to hear. So this sort of AI is doing exactly what we want – conducting the repetitive, boring, difficult (because of the magnitude of data) tasks which none of us really enjoy doing or, given the scale of data today, could even do! I was reading about the sea-change brought about in accountancy when computers could sit on every desk and, more particularly, when those computers could run spreadsheets: there were fears of mass redundancies in accounting firms. And so there were, at the clerk level of the business; but amongst the higher-level accountants, jobs boomed like never before, because the things they could do with spreadsheets – the new things they could offer their clients – had been undreamable before. So let us take the tales of mass unemployment with a pinch of salt – after all, I wouldn’t mind betting that some of the accounting clerks were able to upgrade their skills and move up the ladder with the help of a computer…

Above is an example of a tricky task for any graphic designer to do manually – designing a seamless tile repeat. I will come back to this particular task in the future, but suffice to say that it was accomplished in under sixty seconds, which is, for now, simply amazing!

Text AI

So first of all, I want to look at my first encounter with ChatGPT, which was developed by OpenAI and which they describe as a Large Language Model chatbot – a Generative Pre-trained Transformer. What that means is that an AI computer programme has been trained on a huge amount of written material – novels, adverts, and non-fiction from the World Wide Web – and, having absorbed the patterns in all this material, it can stitch information together according to very clever algorithms to write “original” material. So I thought I would test it out by asking it to write a piece about one of the more colourful phrases from my recent A to Z Challenge – phrases we mostly know the meaning of but have forgotten the origins of. First, here is my explanation:

“Cold Enough to Freeze the Balls Off a Brass Monkey!” – you may be beginning to see the “lost in translation” aspect of sayings overheard from sailors by landsmen, in which case you will not be surprised to learn that this expression has nothing to do with statues of simians cast in brass losing their genitalia!

A “monkey” on a wooden ship, especially a warship, was a (usually wooden) tray with rounded depressions in which cannon balls were stacked in preparation for battle – the last thing you wanted was heavy cannon balls rolling around on a pitching deck, let alone a “loose cannon” – so cannons were secured against recoil on firing by strong ropes, and cannon balls were kept on a monkey. I said that monkeys were usually made of wood, but admirals, or very lucky captains who had made a lot of money from their share of “prizes” (captured ships, evaluated and paid for by the Navy), were allowed to prettify their ships with gold leaf, dress their crews in custom, fancy uniforms, and purchase brass monkeys rather than the standard-issue wooden ones. Now here’s the thing: a monkey was carefully designed so that you could pile the cannon balls up in a pyramid, to maximise your supply of cannon balls in as small a space as possible, but the coefficient of thermal expansion of brass (the monkey) differs from that of iron (the cannon balls) – the brass monkey shrinks more than the iron balls in very cold conditions. Now the carefully spaced positions for the second and subsequent layers of the pyramid are too tight, and the balls can roll off the monkey – hence “Cold Enough to Freeze the Balls Off a Brass Monkey!” Simples! Not!
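For the sceptical reader, the physics is easy enough to check with typical handbook figures – the values below are illustrative textbook coefficients (exact alloys vary), and the lengths are my own example numbers, not measurements from a real monkey.

```python
# Rough check of the brass-monkey physics using typical linear expansion coefficients.
# Values are illustrative textbook figures; exact alloys vary.
alpha_brass = 19e-6   # per degree C, typical for brass
alpha_iron = 12e-6    # per degree C, typical for iron
drop_c = 40           # e.g. from a mild 20 C day to a -20 C winter deck
length_mm = 300       # example spacing across one row of the monkey

shrink_brass = alpha_brass * drop_c * length_mm
shrink_iron = alpha_iron * drop_c * length_mm
print(f"Brass contracts {shrink_brass:.2f} mm, iron {shrink_iron:.2f} mm over a "
      f"{drop_c} C drop: the brass monkey does indeed shrink more than the iron balls.")
```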

Now here is ChatGPT’s first attempt:
“The expression “cold enough to freeze the balls off a brass monkey” is a well-known phrase that means that it is extremely cold outside. Although it is a common phrase, its origin and accuracy are often disputed.
One theory suggests that the phrase originates from the age of sailing ships when cannonballs were stored on the deck of the ship in a brass frame called a “monkey.” The frame consisted of several round holes where the cannonballs were stacked. During cold weather, the brass frame would contract more quickly than the iron cannonballs, causing them to fall out of the frame. Hence, the phrase “cold enough to freeze the balls off a brass monkey.”
However, there is little historical evidence to support this theory. The phrase “brass monkey” itself is not found in any nautical dictionaries or records, and the idea that brass contracts more quickly than iron is also questionable.

Another theory is that the phrase originated from a Victorian-era pub game called “brass monkey,” which involved placing brass balls on a tray and attempting to knock them off with a ball bearing. The game was typically played in cold weather, and if the temperature dropped low enough, the brass balls could freeze to the tray, making it impossible to play. Hence, the phrase “cold enough to freeze the balls off a brass monkey” may have originated from this game.
However, like the previous theory, there is little concrete evidence to support this theory.
Overall, the origin of the phrase “cold enough to freeze the balls off a brass monkey” remains uncertain. It is likely that the phrase has evolved over time, and its origins may be lost to history.”

So what do we think of this? Well, firstly, the claim that “the idea that brass contracts more quickly than iron is also questionable” is simply not true – science tells us that brass both expands and contracts more than iron – so there is a lie right there! Secondly, ChatGPT says that there is no evidence of the phrase in “nautical dictionaries or records”. Whether or not such dictionaries even exist, there are plenty of references to this particular phrase and its origin in many places on the web and in literature – I first heard of it in the books of Patrick O’Brian, who spent much of his life carrying out research for his books in the Admiralty records, and I would trust him implicitly. You have to find as many references as you can when researching a subject like this. There are, of course, always people who will dispute the origins of various phrases, and so perhaps ChatGPT gave undue credence to these doubters. But then ChatGPT goes even further and completely fabricates an alternative theory! The idea of a “Victorian-era pub game called ‘brass monkey’” does not show up in any search of the internet whatsoever, so ChatGPT has not only dissed the correct explanation but suggested a completely fictitious alternative – all done with a convincingly authoritative tone. Luckily, there is a Regenerate button and, on pressing this, without any refining of the prompt or critiquing of the first result, ChatGPT came clean and produced a correct version of the explanation:
“The expression “cold enough to freeze the balls off a brass monkey” is a colloquial phrase used to describe extremely cold weather.
There are a few different theories about the origin of this expression, but the most widely accepted one is that it comes from the era of sailing ships.
In the days of sail, cannons were an important part of a ship’s armament. The cannonballs were made of iron and were stored on deck in piles or pyramids. To prevent them from rolling around in rough seas, the cannonballs were often stacked on a flat brass plate called a monkey.
When the temperature dropped below freezing, the brass monkey would contract more than the iron cannonballs, causing them to fall off. Hence, it was said to be “cold enough to freeze the balls off a brass monkey.”
However, it’s worth noting that there is no definitive proof for this theory, and other explanations have been proposed as well. Nonetheless, the phrase has become a part of English language and culture, and it is still used today to describe extremely cold weather.”

The second iteration is both factually more correct and more concise, but for a first encounter with ChatGPT to throw up such a blatant fabrication is worrying. It suggests that, however miraculous the task of researching and collating this story might be, the output definitely needs checking by a human being, because otherwise another false news item might find its way onto the Internet and go echoing around and around… Just such a story of defamation by a ChatGPT “news” story happened to an Australian mayor who, rather than being portrayed as the whistleblower he actually was, was reported to have gone to jail for corruption – he is suing ChatGPT’s developers, OpenAI.

Not that I am fishing for compliments but what do you think of my explanation of the phrase compared to ChatGPT’s? After all, you might think I was too verbose and ChatGPT more concise…

Image AI

So far, I have only tried out the Midjourney bot (bot being short for robot, which I discussed in my “R” post in April), but I will be trying and comparing others and reporting back to you, Dear Reader. For now, I just want to share some images that I have created which were successful – although sometimes only after several attempts to refine the prompts. More of that next time…

Each submission or message to the Midjourney Bot produces not one but four variations for you to choose from. You may then ask for an enlargement of one (see below), or take one as a starting point for variation and another four will be developed. The prompt for this group was “flower_fairies_playing_amongst_blackberry_stems_and_fruit”.
Here is the enlargement of the top left image.

I then decided to turn this into a tiled image (below).

The prompt for this image was “Village_beside_lake_style_of_Charles_Rennie_Mackintosh_pallette knife_oil_painting”. I think the syntax should have been “in the style of”, so I’m not sure that this owes much to Mackintosh, but it is great – I think…

So we (if you want to come along with me) are on our way to investigating AI – so far, Text AI: dodgy; Image AI: amazing!

R – Robot – robota – Czech for forced labour – “Foreign” words appropriated – Rule of Thumb…

The origin of the word Robot is the Czech word robota, meaning “forced labour”, from a Slavic root – rab – meaning “slave”. Herein lies much of our fear and angst when we consider the future of robots, because Slave implies a Master, and slaves are capable of revolt – of turning against their masters…

The purpose of developing robots is that they might assist humans in doing jobs which are too difficult, dangerous or just too plain boring for humans – things which in the past, and even today (think wage-slaves, modern slavery), have been done by human slaves – and so these fears have a foundation in fact: there have been many slave revolts!

A quote from the author of the play that coined the word “robot”: “The product of the human brain has escaped the control of human hands. This is the comedy of science.”

The problem of creating robots is twofold – body and mind. On the body side, we have long had automatons – they range from, say, a music box which can play a tune to the most sophisticated machines that are now being tested for their ability to play football. On the mind side, we have the quest for AI – Artificial Intelligence – which is the subject of hot debate at present for reasons varying from “Will AI take our jobs away?” to “Will AI outgrow and destroy human beings?”, which brings us back to Karel Čapek, the man who first coined the word robot in his play “R.U.R.” (which stands for “Rossum’s Universal Robots”), premiered in Prague in 1921. Like many of his generation, just out of the horrors of the First World War, Čapek was sceptical of the utopian benefits of science and technology – or rather of the uses to which human beings put those things. You can read a more detailed account of his play here. But what “R.U.R.” illustrates is that science fiction is the way we explore the possibilities and problems of what may be achieved in the development of robots and in our relationship to them.

Starting with the body problem: long before the amazingly intricate creations of the 18th century, with their watchmaker ingenuity inside, the ancient Egyptians imbued statues with souls and the Greeks envisaged artificial men such as Talos. But those later amazing mechanical figures, which might play a tune on an inbuilt musical box, were only built to perform one task, albeit a potentially complex one, and in this respect, modern technology has created many robots which assist us today without posing any threat except to the workers they superseded. Car plants use many robot arms to manufacture cars with greater strength, (programmed) dexterity, speed and accuracy than the human beings who used to do the jobs. Still, without reprogramming, these robot arms do one thing only and their “intelligence” is limited to a programmable computer.

Automaton in the Swiss Museum CIMA. Rama, CC BY-SA 2.0 FR https://creativecommons.org/licenses/by-sa/2.0/fr/deed.en, via Wikimedia Commons

Turning to the issue of robotic minds – Artificial Intelligence is progressing in leaps and bounds, to use an anthropocentric metaphor. Several people doing this year’s A to Z Challenge have experimented with AI, including Misky, who has used the AI graphics app Midjourney to create amazing illustrations for the poems she has posted – check out her site! She has talked about how many times she has to prompt the app in order to get these illustrations the way she wants them, tweaking the style and content descriptors, and this shows that although the Midjourney app is incredibly powerful, it is still soft AI. The limitations of soft AI are best illustrated by Alexa, the Amazon speaker’s assistant: instruct it thus – “Alexa – tell me a joke!” – and she will indeed tell you a joke, but then say “Alexa – tell me another…” and Alexa doesn’t know what you mean unless you specify “another joke”. Hard AI would be the kind which was truly capable of independently sentient thought, and we are some way off from that if Alexa is anything to go by. Alan Turing, imagining (stupendously) the future possibilities of AI, devised the Turing Test, in which an evaluator holds two remote conversations, one with a human being and one with an AI, and if the evaluator cannot distinguish which is the AI, then it might be said to have truly intelligent behaviour. We may be approaching this watershed moment, but I like to think that an AI would not be drawing out the ideas I have in writing this blog, for example – at least not without close supervision. Anyway, I am going to do my own evaluative exploration of AI right here once the A to Z is finished, so watch this space…
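(To make the Turing Test a little more concrete, here is a toy sketch of my own – not a real test, just an illustration in Python: an evaluator puts the same questions to two hidden respondents, one a canned “soft AI” of the Alexa-ish kind described above and one a human typing at the keyboard, and then has to guess which was which.)

import random

def machine_reply(question: str) -> str:
    # A deliberately shallow "soft AI": canned answers and no memory of context,
    # much like the "Alexa - tell me another..." failure described above.
    canned = {
        "tell me a joke": "Why did the robot cross the road? It was programmed to.",
        "tell me another": "I'm not sure what you mean.",
    }
    return canned.get(question.lower().strip(), "Could you rephrase that?")

def human_reply(question: str) -> str:
    # The human respondent types an answer at the keyboard.
    return input(f"  (answer as the human) {question}: ")

def toy_turing_test(questions) -> None:
    # Hide which respondent is which behind the labels A and B.
    order = random.sample([machine_reply, human_reply], k=2)
    hidden = {"A": order[0], "B": order[1]}
    for q in questions:
        print(f"\nEvaluator asks: {q}")
        for label in ("A", "B"):
            print(f"  {label} replies: {hidden[label](q)}")
    guess = input("\nWhich respondent was the machine, A or B? ").strip().upper()
    actual = "A" if hidden["A"] is machine_reply else "B"
    print("Correct!" if guess == actual else "Wrong - this time the machine fooled you.")

if __name__ == "__main__":
    toy_turing_test(["Tell me a joke", "Tell me another"])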

The problem in designing robots which are indistinguishable from human beings is how to cram an AI-sized computer into the body of a robot – ChatGPT can generate text which only another specialised app can discern to have been written by an AI, but the computers necessary to run ChatGPT are enormous, and the miracle which is the human brain is most notable for its compact size, given its power. Nevertheless, there are many who are afraid that AI alone, without human-looking robots, can outmatch the human race and destroy it. Of course, we have many science fiction thought experiments to thank for that particular trope, from Čapek’s “R.U.R.” and Fritz Lang’s “Metropolis”, through Asimov’s “I, Robot” and my own favourite, “Bladerunner”, to – looming large in these debates about the dangers of rampant AI – the “Terminator” series of films.

(From left) Alfred Abel, Brigitte Helm, and Rudolf Klein-Rogge in Metropolis, directed by Fritz Lang, 1927. From a private collection.

In all these fictional considerations of the relationship between men and machines, different solutions are proposed for keeping the “robota-slaves” in check. Asimov came up with The Three Laws of Robotics:

  • First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  • Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.
  • Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These laws would be encoded into the AI controlling all robots at their inception and would be sacrosanct. This idea escaped the creators of the Terminator series’ robots, whilst in Bladerunner, the manufacturers of the “replicants” had to build auto-destruction into their products after a small number of years, lest their self-learning robots get too big for their boots and turn on their creators – sensibly too, replicants were not allowed on Earth but were only sent to do the typical jobs of slaves out in space, as miners, soldiers, builders – dirty, dangerous jobs. Ultimately, the Replicants in Philip K. Dick’s story show us something about what it is to be human (as all good science fiction does, if only because it is written by humans). At the end of the story (spoiler alert), the human Bladerunner, charged with tracking down and destroying renegade Replicants who have made it to Earth in pursuit of getting their lifespans extended, is being dangled over the edge of a roof by the leader of the group of Replicants. Rutger Hauer, the actor who plays the Replicant Roy Batty, whilst dangling the Bladerunner, delivers a speech which has become known as the “Tears in the Rain” monologue, as follows:

“I’ve seen things… seen things you little people wouldn’t believe. Attack ships on fire off the shoulder of Orion bright as magnesium… I rode on the back decks of a blinker and watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments… they’ll be gone.”

Moments later, he lifts the Bladerunner to safety just before his brain self-destructs – talk about saved by the bell! But of course, the Bladerunner – played by Harrison Ford – was saved by the robot/replicant, perhaps out of mercy (a very human trait which the replicant might have developed), but perhaps in the hope that the Bladerunner might respond by trying to change things for Replicants. To me this gets to the crux of being human – just when we are getting the hang of life, our time is up…

Incidentally, Rutger Hauer, not liking the lines that had been written for him, and with only a few minutes of shooting time left, famously improvised the “Tears in the Rain” monologue – kudos!

In our fears over the development of robots and AI, are we perhaps projecting our own human traits onto them? We aspired to create them to be our helpers – AI interpreting MRI scans more effectively than human beings, for example – so why would we think that AI would want to destroy or enslave human beings (The Matrix)? Perhaps it’s because we human beings, given the chance, have all too often enslaved other human beings – abused them, exploited them and been ready to lynch them at the first sign of independence, let alone revolt. Do we need to take heed of science fiction’s thought-experiment warnings? Of course we do! We should no more allow the unregulated development of AI than we would allow our children to play with loaded guns, but those regulations would be there to keep in check the humans who would use AI – or rather misuse it. We should watch out for governments who want to create AI-powered weapons or control their people with ever more efficient propaganda – even before AI we were struggling to know what is true in the news. We have plenty to worry about in the humans, let alone the AI and the robots. However, between regulation, common sense and, perhaps most of all, the fact that AI might have learned lessons from the human mistakes that are messing up the planet, might we be pleasantly surprised to find that AIs assist us as they were intended to do – solving problems of the environment, working out how to operate an economy not based on permanent growth and war? Might we be headed for Iain Banks’ “Culture” series rather than “Terminator”…

So, one final sci-fi thought experiment, from the great Marge Piercy – “He, She and It” – which goes to the question of what would happen if we could achieve hard AI and make it small enough to fit into an android-style (human-shaped) robot. In an environmentally post-apocalyptic world, a woman is given the task of socialising just such an android, because Marge Piercy imagines that true sentience, as opposed to a glorified search engine, would require teaching and guiding as to what sense to make of the world – rather like a child. If she is right, then the great leap forward to hard AI, true sentience, would not be a runaway Terminator scenario, but a chance to imbue AI with the best qualities of human beings rather than the worst. Running through the book is the story of the Golem – a man made from mud and brought to “life” by Cabbalistic magic in order to protect the ghetto – and in this story is that familiar warning of the dangers inherent in creating powerful androids which has echoed through speculative fiction ever since Mary Shelley’s “Frankenstein”… “He, She and It” is such an amazing book that I am surprised it has not been made into a movie…

So what do you believe will be the future of AI and robots – slave, nemesis of the human race, or willing and able helper?

And so, lastly, to as pernicious a law as humans could devise, the Rule of Thumb – another phrase that has disputed meanings. Some people take the Rule of Thumb to describe a readily available way of measuring things – an inch is approximately equivalent to the breadth of the top joint of an adult thumb – but a ruling attributed to Judge Francis Buller in 1782 allowed that a man could legally beat his wife, as long as he used a stick that was no thicker than his thumb – see here. I suspect that those who favour the measuring theory and disparage the beating one are men…

This Gillray cartoon of Judge Buller from 1782 shows ‘Judge Thumb’ selling sticks for wife-beating. (Image: Bridgeman)

The Cant languages from the Wikipedia article for the letter “R” are: