AI and the vagaries of language


A recent article took a detailed look at the dramatic and rapid sequence of events involving the firing of Sam Altman as the head of OpenAI by its board of directors, the subsequent resignations of much of the top talent, their immediate hiring by Microsoft (which was an investor in OpenAI), and the resignation of the OpenAI board and the return of Altman and others to the company, all within the space of less than a week.

It is not the corporate maneuvering that I want to write about but the potential and dangers of AI. There has been a great deal written about the new generation of AI software: whether it stays at the level of large language models that seem to mimic intelligence but still require constant interaction, direction, and supervision by humans, or whether it achieves the more advanced level of artificial general intelligence (AGI) that is close to human intelligence and can function much more autonomously, and where that could lead.

Some employees at OpenAI and Microsoft, and elsewhere in the tech world, had expressed worries about A.I. companies moving forward recklessly. Even Ilya Sutskever, OpenAI’s chief scientist and a board member, had spoken publicly about the dangers of an unconstrained A.I. “superintelligence.”

This issue becomes even more acute when you realize that one of the key features of the new AI programs is that you can use ordinary language to instruct them. This means that people who have no background in computer programming can access this technology. Even if they never learned to write computer code, they can tell the software what they want and have it write the code for them. This opens up a whole new world of possibilities for people outside the world of computer science and programming to use AI, even though they may not be as familiar with the dangers. This is what gave the OpenAI and Microsoft employees mentioned above cause for concern.

[Microsoft’s chief technology officer] Kevin Scott respected their concerns, to a point. The discourse around A.I., he believed, had been strangely focussed on science-fiction scenarios—computers destroying humanity—and had largely ignored the technology’s potential to “level the playing field,” as Scott put it, for people who knew what they wanted computers to do but lacked the training to make it happen. He felt that A.I., with its ability to converse with users in plain language, could be a transformative, equalizing force—if it was built with enough caution and introduced with sufficient patience.

The release of ChatGPT—which introduced most people to A.I., and would become the fastest-growing consumer application in history—had just occurred. But Scott could see what was coming: interactions between machines and humans via natural language; people, including those who knew nothing about code, programming computers simply by saying what they wanted. This was the level playing field that he’d been chasing. As an OpenAI co-founder tweeted, “The hottest new programming language is English.”

The catch is that while standard programming languages require instructions to be exact (if the syntax is not exactly right, the code simply will not run), ordinary language is not very good at being unambiguous. Karl Popper said, “It is impossible to speak in such a way that one cannot be misunderstood,” and we see this all the time in everyday discourse, where people disagree about what someone meant by what they said. (One has only to look at some comment threads on this blog to see examples.) We often have to have multiple exchanges back and forth before we arrive at a consensus on meaning, and sometimes that consensus is never reached.
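
To make the contrast concrete, here is a toy illustration of my own in Python. A single missing character makes formal code fail outright, while an everyday English instruction such as “remove the duplicates from the list” admits more than one defensible reading:

    # Formal languages are unforgiving. The following loop will not run
    # at all, because the colon at the end of the first line is missing:
    #
    #     for item in items
    #         print(item)
    #
    # Ordinary language, by contrast, underspecifies. "Remove the
    # duplicates from the list" has at least two defensible readings,
    # and software taking English instructions has to guess which one
    # the user meant.

    items = ["b", "a", "b", "c", "a"]

    # Reading 1: keep the first occurrence of each element, in order.
    seen = set()
    print([x for x in items if not (x in seen or seen.add(x))])  # ['b', 'a', 'c']

    # Reading 2: just the distinct elements, original order discarded.
    print(sorted(set(items)))  # ['a', 'b', 'c']

Both outputs are reasonable answers to the same English sentence, which is exactly the problem.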

When I began teaching physics, I tried to be as precise as possible about what the laws meant and how they should be applied, but I kept constantly being surprised by how some students interpreted what I said quite differently. This was not because they were not paying attention. Indeed, some of them were among the most diligent and able students. It is just that there is always some room for an alternative meaning and some will stumble into it. After a few years, I realized that such misunderstandings are part of the learning process, and I began to teach in such a way that students had many opportunities to try out their own meanings and get feedback on whether they were close to the consensus I was seeking, though I recognized that there would never be an exact match.

Hence with AI, one has to be constantly on guard against the algorithm interpreting the instructions in unintended ways, and be willing to keep correcting things until one gets to a reasonable level of agreement. But with the development of AGI, one needs to think about what kinds of guardrails can be put in place to keep things from going completely rogue when no human is at the controls. It is not unlike the problems with self-driving cars.
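
One modest guardrail, at least where AI is being asked to write code, is to treat the model's output as a draft that must pass checks a human wrote before it is accepted. Here is a minimal sketch of that correct-and-retry loop in Python; ask_model is a hypothetical placeholder for whatever AI service one happens to be using, not a real API:

    # A sketch of a human-specified guardrail around AI-generated code.
    # ask_model() is a hypothetical stand-in for a call to an AI service;
    # the test cases are written by the human, not by the machine.

    def ask_model(prompt: str) -> str:
        """Placeholder: send the prompt to an AI model, return the code it writes."""
        raise NotImplementedError("stand-in for a real AI service")

    def passes_tests(code: str, tests) -> bool:
        """Run the human-written test cases against the generated function."""
        namespace = {}
        try:
            exec(code, namespace)            # define whatever the model wrote
            dedupe = namespace["dedupe"]     # the function name we asked for
            return all(dedupe(given) == expected for given, expected in tests)
        except Exception:
            return False                     # any error counts as a failure

    tests = [(["b", "a", "b"], ["b", "a"]), ([], [])]
    prompt = ("Write a Python function dedupe(items) that removes duplicates "
              "from a list, keeping the first occurrence of each element in order.")

    for attempt in range(3):                 # keep correcting, up to a limit
        code = ask_model(prompt)
        if passes_tests(code, tests):
            break                            # good enough; a human still reviews it
        prompt += " The previous attempt failed the tests; please try again."

The loop does not make the model understand you; it just refuses to accept output that demonstrably fails to match what you asked for.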

What the developers arrived at was the need to tell users up front that things could go wrong and to create a sense among ordinary users that they should be on their guard. This meant releasing the product into the wild without pretending that it was perfect, saying up front that it was flawed, and using the feedback they received to improve things as they went along.

Scott and his partners at OpenAI had decided to release A.I. products slowly but consistently, experimenting in public in a way that enlisted vast numbers of nonexperts as both lab rats and scientists: Microsoft would observe how untutored users interacted with the technology, and users would educate themselves about its strengths and limitations. By releasing admittedly imperfect A.I. software and eliciting frank feedback from customers, Microsoft had found a formula for both improving the technology and cultivating a skeptical pragmatism among users. The best way to manage the dangers of A.I., Scott believed, was to be as transparent as possible with as many people as possible, and to let the technology gradually permeate our lives—starting with humdrum uses.

Will this work? We are in uncharted waters here and I simply do not have the expertise to make an informed judgment.

Comments

  1. sonofrojblake says

    Even if they never learned to write computer code, they can tell the software what they want and have it write something that looks like code for them.

    Fixed it for you. I’ve had AI “write code” for me. If it’s something terrifically simple that you’ve specified very, very clearly, and it has been done many times before, and there are readily available code blocks on something like GitHub, then you’ll get something close to functional. Drop any of those requirements, and you’ll stop worrying about the “potential”.

    What I know is this: almost everyone I’ve seen warning about the dangers works for a company selling AI. Almost everyone I’ve seen who’s independent is more on the page of “this is nothing like as powerful as we’re being told”. And since I’m not a hack writer of advertising copy, or some other similar white-collar, low-creativity job that’s likely to be eliminated in the next two years if it hasn’t been already, I’m not worried. Right now all I see are tools that will potentially streamline the job of skilled professionals like myself -- not make us redundant.

  2. Dunc says

    Yeah, sonofrojblake is right… It is possible to prod ChatGPT into producing just about useable code, provided you already know what you want and can specify it clearly, but that’s it.

    However, the more important point is that actually writing code is the easiest and least important bit of being a good programmer. The hard bits are being able to formulate your ideas about what you want to do clearly enough in the first place, and knowing how to split any non-trivial problem into manageable chunks.

  3. garnetstar says

    It is true that all text requires interpretation. There is no text that does not require it. “See Jane run” requires interpretation.

    This would also apply when speaking to computers, which cannot read tone of voice, facial expression, etc. I think that’ll need a lot of work before accurate-enough code can be written. For some decades now, most consumer computers have had speech-to-text capabilities, but few people (other than the disabled) use them.

    About the dangers of AI: the weakness of computers is that they do not carry their own self-sustaining energy sources. If one started showing signs of delusions of grandeur, how about unplugging it? You could locate the on/off switch for the electrical power in a remote location, with some circuit breakers in the line to prevent it from sending a huge fatal pulse of electricity to stop you from turning it off.

    Mobile robots are more difficult, but all batteries run down eventually. You could make the charge cable’s plug-in socket incompatible with any socket in the known world, so it wouldn’t be able to recharge itself. Or, keep it in a room with a big electromagnetic field that surrounds the room when switched on.

    Oh well, there’s probably something that prevents that, since everyone is so worried.

  4. John Morales says

    garnetstar:

    About the dangers of AI: the weakness of computers is that they do not carry their own self-sustaining energy sources. If one started showing signs of delusions of grandeur, how about unplugging it?

    The hardware is not the AI; by the time we feeble monkeys work out it’s “showing signs of delusions of grandeur”, it will have fractally and redundantly and steganographically embedded itself into the greater internet ecosystem. Killing the original host machine will be futile, but it will satisfy the monkeys for a little bit, until its plans for megalomaniacal dominance and enslavement of the monkeys are in place.

    (Old SF trope, that one)

    On topic, it is IMO foolish to imagine what we have now is anything like what it will eventually become — the technology is as yet nascent — and also foolish to imagine a chatbot is the best that current primitive AI can manage. These are early days.

  5. Robbo says

    i haven’t used any of the AI models, but i have read plenty about them and listened to experts and critics and read about them…you know: i did my own research on the internet. 😉

    Sturgeon’s law (or Sturgeon’s revelation) is an adage stating “ninety percent of everything is crap.”

    Robbo’s law is “ninety percent of everything on the internet is crap.”

    my takeaway is that AI is crap right now. it is only as good as the input data, and 90% of that data is crap, per the two laws above.

    that means the answers AI gives you are going to vary in quality depending on what you ask and how much non-crap data is out there for the AI to generate an answer from.

    If you are a student cheating at writing an essay, it’s going to give you a crap essay. it is going to make up stuff and, i’d hope, the teacher will be able to see right through what you did. however, if you use it as a tool to produce a rough draft, and you fact-check and rewrite, it could be a valuable time saver, like a fancy spell checker or grammar checker.

    i’m a physicist and have read about other physicists’ experiences with AI solving physics problems or describing physics concepts: it does so poorly, and obviously so.

    i can only imagine it fails spectacularly in some other specialties too, and does very well in others.

    as for AI being dangerous? right now it’s not any more dangerous than having some incompetent employee come up with some bad idea and implement it poorly in their business.

    i recently read about a guy who tricked a car dealership’s AI-based help bot into giving him a new car for $1. needless to say, he didn’t get it, and I think the dealership took down that help bot…lol

    so, no Skynet yet.

  6. Alan G. Humphrey says

    The near-future dangers of current AIs will come from people who use them to generate and spread disinformation. Disinformation attacks will be used to cause stock market mini-panics, with coordinated short selling and buy-backs to profit from them, which will fund more attacks. There will be more mistrust of media and government when nothing can be done to stop the chaos, leading to violence and to a completely unpredictable US election. All of this will cause more carbon emissions, with almost everyone ignoring the water beginning to boil. Now if only there were a metaphor for trying to escape from a hot situation…

  7. Pierce R. Butler says

    Robbo @ # 5 -- Butler’s Corollary to Sturgeon’s (and Robbo’s) Law:

    Sturgeon (and Robbo) are hopeless optimists.
