Explaining What’s up with AI (Part 6 – The Dark Side)

Part 1 of the series is here.

AI, like any new technology, brings with it both promises of great benefits and many concerns about how it might cause bad outcomes. Since there are plenty of people already trying to get rich by selling the idea that AI will solve all our problems, I think it's worth spending some time discussing the concerns about how these technologies might affect the future in ways that aren't so rosy.

Safety

When tools like ChatGPT took off, people raised various concerns about their safety. Will people be able to ask them for information on how to do bad things, like build a bomb or hack into computers? There was another set of concerns around privacy: if the AI knows everything, then maybe it will divulge sensitive information about people or organizations. A third concern is bias: does the AI give fair and impartial answers to difficult or controversial questions?

For most of these concerns, it seems like the big AI companies are at least making efforts to minimize bad outcomes. It won't be perfect and there will be examples of failures, but it's essentially the same situation as moderation and filtering on the internet in general. Real harms will occur, and people will always complain about censorship, bias, or the promotion of hate speech.

Even more concerning than people using ChatGPT to look up bad things is when these tools are built into processes that directly affect our lives: mortgage approvals, job applications, and so on. It's tempting to think that computers will be more accurate or fair than humans in the loop, but that's not likely to be the case, especially since, as we have seen previously, we don't know enough about how these tools even work. We know they will reflect their data, but we don't always know enough about the data that was used to train them and what biases or other pitfalls might be lurking in it. And we aren't set up to properly regulate or audit these systems.

A lot of these concerns were brought up several years ago in this great book by Cathy O'Neil. The paper on Stochastic Parrots continues the discussion of many of these concerns, and was contentious enough that people were fired over its publication. While it is technical, it's actually pretty readable if you've made it this far in this series.

Accuracy 

Related to safety, and worth its own section, is the question of whether the results we get from AI are accurate. How much can we trust what we get out of these systems? As mentioned before, people are concerned about when they "hallucinate". It's not clear this is fixable, according to this post by Karpathy:

I always struggle a bit when I’m asked about the “hallucination problem” in LLMs. Because, in some sense, hallucination is all LLMs do. They are dream machines.

We direct their dreams with prompts. The prompts start the dream, and based on the LLM’s hazy recollection of its training documents, most of the time the result goes someplace useful.

It’s only when the dreams go into deemed factually incorrect territory that we label it a “hallucination”. It looks like a bug, but it’s just the LLM doing what it always does.

So at some level we can never fully trust them, but it's also true that they seem to be correct a great deal of the time. How can we learn to better deal with these kinds of situations?

One thing we can do is try to form a better understanding of what it means to be accurate. Let's go back to our example about detecting a cat in an image. What exactly do we mean when we say that a system can predict this with high accuracy? There are at least two ways we could be wrong.

  • Type 1 [False Positive] – There is not a cat in the image but we say there is.
  • Type 2 [False Negative] – There is a cat in the image but we don't detect it.

In machine learning systems there's always going to be some error, but the cost of Type 1 and Type 2 errors might be very different. Let's consider that instead of detecting a cat we are screening patients for cancer. We want a test that is very unlikely to make Type 2 mistakes: we don't want to miss any potential cancers. We might tolerate some Type 1 mistakes if we can then follow up with more accurate but potentially more expensive or invasive tests. In the future, we will all see more and more reports about AI systems that are 90+% accurate, but we really need to understand what that accuracy actually means. Unfortunately, neither our education system nor our news organizations are well set up to help here.
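To make this concrete, here is a toy sketch in Python, with made-up numbers (not from any real study), showing how a single accuracy figure can hide exactly the kind of mistake we care most about:

    # Toy illustration (made-up numbers): why "accuracy" alone can mislead.
    # Imagine screening 1,000 patients where only 10 actually have cancer.

    total_patients = 1000
    actual_positives = 10                         # patients who really have cancer
    actual_negatives = total_patients - actual_positives

    # A lazy "model" that simply predicts "no cancer" for everyone:
    true_positives = 0
    false_positives = 0                           # Type 1 errors: healthy flagged as sick
    false_negatives = actual_positives            # Type 2 errors: every real case is missed
    true_negatives = actual_negatives

    accuracy = (true_positives + true_negatives) / total_patients
    print(f"Accuracy: {accuracy:.1%}")            # 99.0% -- sounds impressive
    print(f"Cancers missed: {false_negatives}")   # ...but it misses every single cancer

    # A more useful question: of the real cancers, how many did we catch?
    recall = true_positives / actual_positives
    print(f"Recall (sensitivity): {recall:.1%}")  # 0.0%

The headline accuracy here is 99%, yet this "test" catches zero cancers, which is why asking what kinds of errors a system makes tells you far more than a single accuracy number.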

Displaced Workers

Much of the initial concern about AI was around worker displacement. Papers have already been written that purport to show declining wages for freelancers. Others are predicting that computer programmers will soon be out of work. If you are a business person, you might see this as a great opportunity for growth. If you are a human person, you might be worried about your job.

This is a huge deal, but it's also pretty early to be able to say much about it. We may or may not see whole job types go away, but it does seem clear already that knowledge/office workers will need to develop the skills for working with these tools in order to be seen as competent and productive.

Deep Fakes

This is a really tough section because there are many potential negative outcomes, but it also feels like we can't easily predict or mitigate them. We are all just going to have to learn to deal with them, and we might not be capable of doing that.

"Deep fake" is the term used to describe high-quality fake video or audio clips in which a person can appear to be saying or doing things they never actually did. At first these were expensive to create and easy to spot, but the tools keep improving. We are now in a situation where it's pretty easy for anyone to create a deep fake of anyone they want, even if they only have a small amount of digital information about the target (a few seconds of audio and a few photos). This can even happen fast enough to impersonate someone live on a phone or Zoom call.

A lot of attention has been given to fake videos of politicians saying controversial things, but we may all become numb to that soon enough. We have already begun to adapt to a "post-fact" environment, for better or worse. A bigger concern is all the other ways this could be employed by bad actors. Here are just a few to think about:

  • If someone can make a deep fake of you, what trouble could they get you into?
  • How easily could scammers prey on vulnerable people if they can sound and even look just like a trusted loved one?
  • What kind of imagery could someone make of your children, and who could they share it with in ways that would hurt them?

We may eventually develop ways to authenticate the origins of videos, and we will slowly learn more about what we can and cannot trust, but are we actually evolved enough to learn to ignore things we have seen, even when we know they are fake?
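As one illustration of what "authenticating the origins of videos" might look like, here is a deliberately simplified Python sketch of the core idea behind content-provenance schemes: the creator (or their camera) signs a fingerprint of the file, and anyone with the matching public key can check that the file hasn't been altered since. This is just the general idea, not how any real standard works in detail, and it assumes the third-party cryptography package is installed.

    # A highly simplified sketch of media authentication via digital signatures.
    # Real provenance standards are far more involved; this shows only the core idea.
    import hashlib
    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.exceptions import InvalidSignature

    # The creator (or their camera) holds a private key; the public key is published.
    private_key = ed25519.Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    video_bytes = b"...raw video data..."          # stand-in for an actual video file
    digest = hashlib.sha256(video_bytes).digest()  # fingerprint of the content
    signature = private_key.sign(digest)           # shipped alongside the file as metadata

    # Later, a viewer checks the claim of origin:
    try:
        public_key.verify(signature, hashlib.sha256(video_bytes).digest())
        print("Signature valid: the file matches what the creator signed.")
    except InvalidSignature:
        print("Signature invalid: the file was altered or wasn't signed by this key.")

Note that a valid signature only proves the file hasn't changed since it was signed; it can't prove the content was true in the first place, which is part of why the social problem remains hard.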

Loss of Agency

There's a romantic story we tell about how factory work destroyed the artisanship required to create items such as shoes, furniture, or clothing. Rather than craftsmen, humans in factories were reduced to one step above the machines they tended, and the resulting artifacts were sterile and lacked authenticity. The whole story is more complicated, but I think it does motivate us to consider what might be lost in a world where we outsource more and more of our creative work to systems such as ChatGPT or Dall-E.

For a lot of people, writing is part of their thinking process. It is in the process of writing and editing that we sharpen our thoughts, and it often results in new insights. It's easy to think you understand something before you have to write it down in detail. If we delegate most of this work to ChatGPT, we may just accept what it echoes back without really thinking about it. For software development, it doesn't seem great that AI will allow humans to build complex systems that they don't understand. I'm not an artist, but I can imagine there are similar issues for people who create images and music.

Social media has already had a huge impact on our attention spans and caused us to do more shallow thinking. Let’s hope that AI won’t accelerate that trend.

We've also seen a lot of examples of how AI agents will be able to work for us, but I think we need to be careful. I wrote more about this here, but essentially, if the AI is going out there and making decisions, it may not be making them in our best interest but rather in the interests of the people who build the AIs.

You can also tell a positive story here. Some people will find that engaging in a form of Socratic dialog with an LLM helps them learn more quickly or understand more deeply. They can learn and grow their skills and agency in a safe, nonjudgmental environment. People are building these AI-enhanced education systems already, so maybe there is some hope.

AGI and SuperIntelligence 

What happens when these systems get even smarter? I wrote more about these topics here, but the main points are below.

While I'm unaware of any complete and precise definition, scientists generally use the term AGI (Artificial General Intelligence) to refer to systems that are not just good at some narrow task, such as generating images or working with text, but have a more general intelligence on par with humans. Many believe these systems are only a few years away and that they will revolutionize our relationship with computers, work, and the world around us. If most or all workers could be replaced with AI systems (possibly with robotic appendages), what would that mean for the future of humanity? Even if they start out being expensive, we are good at optimizing things, and the potential for worker displacement would be enormous.

Technology advances largely via two processes: optimization (making things better) and innovation (coming up with new ways to do things). If AGI only requires optimization from where we are now, it could happen pretty soon. If real innovation is required, it might take much, much longer than people think. My personal view is that we are likely at least 10 years away from real AGI, but there are people smarter than me on both sides of the argument.

There is a subset of AI optimists who also believe that once AGI is achieved, its power will rapidly accelerate into some kind of SuperIntelligence that we can't even imagine and may not be able to control. The argument goes something like this: once we have a computer system that is as smart as us, it will be able to help us make it even smarter, that smarter AI will make an even smarter one, and the process will accelerate. Some really smart people work on something called Super Alignment, which tries to figure out how we lowly, dumb humans will be able to keep this SuperIntelligent AI from doing things that aren't in our interest.

For what it's worth, I don't subscribe to this rapid acceleration idea. I think there are going to be real obstacles to any intelligence growing rapidly, and it's going to require innovation in new ways of thinking that might never be discovered. For now, I don't care to focus on things that might happen someday when there are already plenty of clear and present dangers, as described above.

Should you worry more about an uncontrolled AI taking over the world, or people who want to take control of the world using AI to do it?

Key Points

  • AI systems have many of the same content moderation problems as social media. As we build AI into automated systems, we run significant risks of silently enforcing bias and other toxic outcomes.
  • In order to better understand how accurate an AI system is, we need to think more deeply and clearly about the kinds of accuracy we want.
  • AI systems will displace some workers and require even more workers to become skilled at using these tools.
  • Deep fakes will cause a lot of harm and panic for a while, unless and until we learn how to deal with them.
  • Using AI to do our work for us involves real risks to our ability to think deeply and understand complex systems, if we don't consider our tools and processes carefully.
  • AI is going to get smarter, and there could be even more risks when it does, but we have plenty to worry about now in finding the best ways to use this technology with minimal harm.
