Philosophy and Beyond

AI, Social Media, and the Search for Quality


Interview with an IT Specialist

Romaric Jannel
Jan 17, 2025
Cross-post from Philosophy and Beyond
It was an honor and a pleasure to be interviewed by Romaric – it gave me a chance to think about some of the important aspects and consequences of AI and social media (two favorite topics of mine). -
Jorgen Winther

Romaric: Thank you for accepting my offer of an interview. I found the discussion we started on Substack interesting and thought it would be nice to share your views with my readers. So, thanks again. Could you please introduce yourself and the question you are interested in?

Jørgen: Hi Romaric, thanks for your interest in my thoughts. My name is Jørgen Winther. I was born in the late 1960s in Denmark, and I have had what I would call a rich life: not in regard to money, but in that I have experienced many things, never feeling locked into one particular way of living or thinking. I was educated as a Datamatician (business- and process-oriented computer science), specialising in AI; this came after an initial but interrupted engineering study and some thoughts of studying journalism.

I thought that I was interested mostly in electronics, computing, and technology, probably as a result of the time I grew up in, but also because these were areas full of insight and development. But I sensed a need to question things rather than just accept all the facts of the technological world, and that led me to a number of additional studies and interests during my life, including several languages and a study of Eastern Europe with a focus on Russian language, politics, and literature.

I studied systems thinking once, finding that it was very much in line with my way of seeing the world — everything is connected, and the world exists mostly in the sense of these connections. I also have a strong sense of humanism, believing that there is value in every human.

What matters most to me currently is the great connectedness we are taking part in, with social media and other structures and ideas that push people to be interested in virtually everything that happens. That is too much, causing resignation towards a lot of it and an increasing tendency to retreat into a smaller world from which many details, many people, many ideas, and a lot of history have all been cut off: the idea of living in the now, of being interested in “my own happiness, first of all”, combined with an inability to find that happiness.

It makes people seekers, looking for someone to follow. At times they pick a good one, at other times someone who is less good for them and for the world. Every young person today follows a number of people on social media, making these influencers important shapers of their lives. As people are seeking, they are also constantly looking for truths, often simplified ones; probably the reason that “memes” have become such a big thing in recent years.

Romaric: There is a lot of discussion about social media and AI today. When we look at these discussions, the level of concern about these technologies is clear. However, most people are using social media and AI; consciously or not in the case of AI. What do you think about these technologies and do you feel that people's concerns are justified?

Jørgen: Using social media and AI, just like using the Internet, TV, and so on, has become a standard way of life for most people in many of the world’s countries. It is not really a matter of whether we use them, but how. Just as I am happy that we have roads to travel on, yet sad about their sheer number and their dominance in our lives, I also feel that social media have become too pervasive a foundation of our way of life.

As an online writer, social media is a topic that I keep returning to. I see several problems with it, mainly the gap between the expectations set up by the platform providers and influencers on the one hand, and the experience of their users on the other. Social media forms a place where we can both find what we are seeking in what others create, and become creators ourselves, showing our creations to others. However, the platforms are controlled and directed by their owners, while the spread of creations is governed by algorithms designed for purposes other than user happiness. The bulk of attention remains focused on the creations of major influencers, leaving the rest of us with the unfulfilled hope of one day achieving success on these platforms.

In a way, social media is a fantasy for most people. An artificial universe where we can search for what we believe that we are seeking, but we never really find this — instead, we find simple substitutes, shaped as cat videos, memes, and other quick and apparently rather pointless content. But in a complex world where most of us are tied to a job and other obligations that drain almost all energy out of us, it feels tempting to dive into this fantasy for a moment, to feel the rush of searching, at least, if not the joy of finding.

It is not all bad. We have a need to connect with other people, as I wrote recently in a post on Substack, “The Meaning of Notes.” Socialising isn’t about complex meaning; it is simply about hearing each other and being heard, using whatever simple conversation is at hand as a vehicle.

The same duality of value versus dominance applies here as with the roads, but it is necessary, because we have moved onto the Internet and out of the real world, often not seeing other people in a real context as much as we used to. Social media steals our attention and time, thereby preventing us from spending them on real-life social activities, but gives us in return a feeling of being social. It is a bit like when egg producers put a fake egg in the chicken’s nest when taking the real one: the chicken doesn’t seem to notice the difference; it just wants something to be there.

For some people, this is good! Some who wouldn’t function well in a social context in real life can now find a foothold in virtual life on social media. Others suffer because they have a hard time being accepted as a real person in this fantasy world; they would do better in a world where “social” meant something real. I was occupied with this thought some ten years ago, writing a text about it, “Service Society 2.0 — The Real Virtuality”, but I have seen almost no interest in this way of thinking. I guess it is because I can only show it to the wrong people: those who are hooked on the idea of potentially finding their happiness on social media and hence are in a somewhat blind state of mind, led by the promises of social media rather than by what they experience in reality. They don’t want to hear that they only succeed there if they wouldn’t in real life. They want to stay in the illusion of being on the way to becoming someone appreciated for some inner value that they believe — hope — they have.

Romaric: And how do you feel about the way AI is understood and discussed today? And what do you think about the euphoria that generative AI has generated and continues to generate?

Jørgen: AI has been "emerging" for a long time, as scientists have experimented and failed to find a good way forward for quite a while, until suddenly, a few years ago, we got working mechanisms that then rapidly spread into practical use.

When I studied AI in the beginning of the 1990s, it wasn’t at a very functional level. Some artificial neural networks had been put into use in specialised production environments, where they could help to identify different parts of an animal being butchered, for instance, but the bigger scope of use wasn’t practically possible at the time. Expert systems based on facts and rules were built but ended up failing, as they always lacked some areas of knowledge and some important connections.

We needed something else for AI to be useful in a wider scope.

I will not go into all the details of what has been invented here, but we now have facial recognition that works in practice, for instance; we have wayfinding systems that can be used in self-driving cars or warehouse robots; and we have generative AI that can help us assemble pictures or text that would otherwise have required a human being to spend a lot of time.

Some of these working technologies are really helpful in the right contexts, when used right. But some of them are just as damaging when used in the wrong ways.


I am very critical of the explosive popularity of generative AI, because it seems to be used only because it can save time and, by that, money. In the old, non-AI days, quality control was always an element of the work, as indeed was a sense of wanting quality. In many cases, we now see a fascination with the technological possibilities that leads people to publish content that should have undergone some additional work to become good, but hasn’t. I mentioned this recently in “AI Gibberish”. At other times, people are simply being cynical, knowing that they can send off something bad and unfinished and still be successful with it.

The big danger lies in the widespread implementation of AI in tools like word processors and imaging software that urge users to generate by AI rather than by their own hand. Companies are often led by people who do not understand the limitations of generative AI, and when they see that everything can seemingly be done at the push of a button, they often decide that the work must be doable in a fraction of the time people spent before. Hence, the people doing the work, with or without the help of AI, no longer have the time to ensure reasonable quality, and we do indeed end up with a great deal of low-quality output.

It is even worse when people use AI to “summarise this email” and similar, meaning that they don’t read the full email themselves. This phenomenon extends into all areas of business life, and we are already often in situations where text has been written solely by AI, without a human eye checking it, and then later is being read solely by AI.

What’s the purpose? Why do we want to engage machines to talk to each other this way? The purpose of any business communication should be to convey a message from human to human, but it is being lost in the rationalisation process: the idea that we can save money by automation. People see it as similar to having a machine stamp out shapes from sheets of metal, rather than cutting these by hand, or other well-known productivity gains by automation, but it is really different when we talk about creating meaning.

So, all in all, yes, people’s concerns are justified by some serious problems with both social media and AI. Mostly, people should be concerned about the missing sense of quality, and the missing will to provide something useful to others, that have become commonplace with these technologies. It is not really a problem with the technologies themselves; it is a problem built into human nature. All tools can be used in good ways or bad ways.

Romaric: Here on Substack I have seen many posts that appear to have been written with generative AI. I am thinking in particular of posts that refer to Nietzsche or some other well-studied philosopher and more or less repeat the worst, but easiest to understand, interpretations of these philosophers. I am very concerned. Not because people are using generative AI, which can be a very useful tool, but because they are using it to skip the personal journey into a difficult field; a way of learning that is more rewarding for both the writer and the reader. I would like to ask how you use AI in your writing practice, and what you think are good and bad practices for using generative AI.

Jørgen: Generative AI has entered the writing world quickly and abruptly, suddenly being everywhere and almost out of control. I don’t use generative AI for writing, at least not directly, but there is AI inside many tools today, so anyone using a computer can hardly claim that they are not using AI at all, as it is not always clear if what they did involved AI. It can happen behind the facade, as part of any functionality in a software program or on a website, or even as part of a remote service being utilized.

What I have done deliberately with generative AI is, in a few cases, to generate some images. One of them can be seen illustrating my short story “Waking Up.” In most situations I would prefer hand-made graphics or photos, but I do use editing tools, both for text and images, that have AI-based features.

Tools for checking spelling and grammar, for suggesting the next word, and so on are also potentially based on AI. Logging in to a computer or smartphone uses AI for facial recognition. Calling or writing to the customer service of a consumer-oriented company most often means that you’ll talk to a chatbot, again AI-based.

What I am trying to illustrate here is that we all use AI, whether we want to or not. I don’t like the widespread use of generative AI for writing articles. The main problem is that it is unengaged, showing a lack of interest in what is being delivered. If you don’t want to share your own thoughts, then why do you want to share anything at all? There are enough standard texts in this world; there is no need to use a machine to create even more of them. It can even cause you trouble, as I mentioned in an article, “How To Spoil Your Life With ChatGPT,” where I referred to a well-known real-life case of a couple of lawyers who used ChatGPT to prepare reference cases for a trial, only for it to turn out that ChatGPT had made them up. They were not real, and the lawyers ended up referencing non-existent cases, documented by made-up court transcripts and the like. Fake documents aren’t popular in a court, so it got them into a lot of trouble.

The big problem for many people who need to write as part of their job is that their management and colleagues may not have any sense of quality, but they are in a hurry, so they will expect AI to be used now that it exists. “Why waste time on writing anything by hand?”, they would argue.

During my life, I have met many people who didn’t like reading, and to whom text was just something that had to exist, but they didn’t care about what was in it, or how it was shaped. For such people, pushing a button to create that text is a blessing. As most texts published internally in a company aren’t read at all, ever, it seems to make good sense to think that way, but again: why then write the text in the first place?

When people write for leisure or for sharing ideas, as is often the case on Substack, I would imagine that they want to do it because they like to tell what they think. For a philosopher, or someone interested in the field, it really makes no sense to let AI tell the story instead; it only takes away the opportunity to share one’s own thoughts. It is like having guests over for a home-made dinner, only to serve them a frozen pizza.

People tend to trust AI-generated texts because they seemingly are well written, but we need to learn that AI produces glitter, and not all that glitters is gold. Anything that expresses your opinions or builds on your knowledge should be written by you, not by AI.

Just one possible exception: In the same way as you could consider using a human ghostwriter, an editor, an assistant who writes down what you have spoken into a voice recorder, etc. — all such situations where you get assistance from another person to do part of the work when you yourself are a bit overloaded — you could possibly consider using AI for some of the same tasks. Just know that AI will not ask you questions to make sure that it really will be your thoughts that are written down, so you must yourself carefully edit whatever comes out of it.

I have been running and editing some publications on Medium, and I have seen several incoming texts that were definitely AI-made. In some cases, the presumed writer had altered the first few lines, or perhaps some in the middle, but the temptation to post the text mostly as generated was obviously too big for most of them. I believe that this is quite predictable human behavior, and for that reason I wouldn’t let AI write the text myself, risking that it would be published as fully or partly unedited AI text, perhaps expressing something other than what I wanted to tell.

Maybe, in the future, I could consider asking an AI tool for ideas for improving a text, as I know many writers do, but I would be very skeptical of any suggestion; my experiments with apps such as Grammarly and Hemingway have not convinced me that they can make good suggestions all the way through. There will be bad ideas among the lot, and some of the suggested “corrections” will make the text outright wrong. Most significantly, they have ignored my writing style, and they have been poor at guessing what I actually wanted to say, and why I decided on a certain way of saying it.

Romaric: Thank you, Jørgen, for this conversation and for the elements you add to a debate that will likely continue to evolve for decades to come. Practices are likely to change, and so will the perception of AI. We may be here to see it and add some new elements to the discussion! I warmly invite readers to check out Jørgen’s Substack, Turning Life by Inidox.

Jørgen: Thank you too, it has been a pleasure. And yes, the future will contain a lot of AI for all of us: we must continue the discussion about it — to make sure that we will use this technology for good purposes only and develop ways of ensuring the human aspect and value of everything written.

© 2025 Romaric Jannel