Response to Dr. Collier on AI


I obviously watch a lot of YouTube video essays, so I frequently get recommended thinkpieces about the problems with AI. I usually don’t watch them, because I am literally a professional in the field and I don’t need some vlogger to ramble at me about something I generally understand better than they do. But I watched one of these videos, I disagree with it on some points, and now you’re going to hear about it.

The video is “AI does not exist but it will ruin everything anyway” by Dr. Angela Collier. It is not necessarily the best example to highlight (it’s an hour of rambling, I respect that most readers will not want to watch that), but it’s the one I watched, okay? I’m going to structure this as a list of items, starting with the most fact-based items, moving towards my more contentious opinions.


(The title of each item asserts my own viewpoint, not Dr. Collier’s.)

1. AI does not exist

One of Dr. Collier’s central claims is that AI does not exist. I agree. Within academia, there is a field of research called “AI”, which is a supercategory that includes machine learning, which is a supercategory that includes deep learning (a hierarchy shown by Dr. Collier at 0:12). Machine learning refers to algorithms where some of the parameters are not human-written, but are “learned” by training the algorithm on data. AI includes non-machine learning algorithms, such as explicitly programming the winning strategy to tic-tac-toe.
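To make the hierarchy concrete, here’s a minimal sketch in Python (my own toy illustration, not anything from the video): the first function counts as “AI” in the broad academic sense because the strategy is written entirely by hand, while the second counts as machine learning because its parameters come from fitting data.

```python
# A hand-written rule: "AI" in the broad academic sense, but not machine learning,
# because every part of the strategy is chosen by the programmer.
def tic_tac_toe_move(board):
    """Take the center if it's free, otherwise the first open square."""
    if board[4] is None:
        return 4
    return next(i for i, cell in enumerate(board) if cell is None)

# Machine learning: the parameters (slope, intercept) are not written by hand,
# they are "learned" by fitting the model to data.
def fit_line(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

print(tic_tac_toe_move([None] * 9))           # 4: the hard-coded rule fires
print(fit_line([0, 1, 2, 3], [1, 3, 5, 7]))   # (2.0, 1.0): parameters learned from data
```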

However, when people hear “AI”, what they imagine is something narrower than even deep learning. They’re imagining what might be called “artificial general intelligence”, or AGI. AGI isn’t a well-defined concept, but generally it’s an algorithm that is able to learn and accomplish a variety of tasks like a human would.

The important fact about AGI is that it’s hypothetical; it doesn’t exist. ChatGPT is not AGI. When we describe ChatGPT as AI, many people imagine it as AGI, but it simply is not. ChatGPT seems able to perform a variety of tasks like a human would, but in truth it is just like the machine learning algorithms that came before it. It is trained to perform a specific task: generate text similar to what a human would write. I do not think we will create AGI soon, but I don’t know the future.

Now you might get the impression that the general public plays fast and loose with the word “AI”, while experts in the field are more careful about it. As a working data scientist outside of academia, I am here to tell you that’s not true. Data scientists are also guilty of playing fast and loose with the terminology, basically for marketing purposes. Whatever we were doing before, that’s not AI; whatever we’re building next, that’s AI. The distinction doesn’t matter, because “AI” is essentially a marketing term with no fixed meaning.

2. Machine learning is already used extensively in underwriting

Dr. Collier claims (28:31) that when banks give out mortgages, they’re required to have a human check the application. I work in this industry, and this is completely untrue. She puts up an excerpt from a CFPB press release about it, but it does not support her claims.

In the US, lenders are legally required to explain why they declined an application for credit. That does not prevent them, either in principle or in practice, from using machine learning models to make decisions. Today, lenders commonly use “glass box” models like logistic regression, which are supposedly easy to explain, and they are moving towards “black box” models like gradient boosting, which are supposedly harder to explain. The CFPB press release expresses regulators’ suspicion of black-box models and asserts that lenders are still obligated to provide an explanation; lenders cannot just say “these models aren’t explainable”. So lenders are using a variety of methods to explain black-box models, and it remains to be seen which methods regulators will find acceptable.
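To give a sense of what an “explanation” looks like mechanically, here’s a toy sketch in Python (my own illustration with invented features; this is not the CFPB’s requirement and not any particular lender’s system): with a glass-box logistic regression, each feature’s contribution to an applicant’s score is just coefficient times value, and the most negative contributions can be reported as adverse action reasons. Black-box models like gradient boosting need additional machinery (SHAP values are one popular option) to produce something analogous, which is where the regulatory uncertainty sits.

```python
# Toy "glass box" reason codes for a declined application. All features and data
# are invented for illustration; real adverse action reasons are more involved.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["credit_utilization", "recent_delinquencies", "income", "years_of_history"]

# Synthetic training data: 200 applicants, 4 standardized features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
# Synthetic outcome: repayment is less likely with high utilization and delinquencies.
logits = -1.5 * X[:, 0] - 1.0 * X[:, 1] + 0.8 * X[:, 2] + 0.5 * X[:, 3]
y = (rng.random(200) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, y)

applicant = np.array([1.2, 0.9, -0.3, -0.5])   # one standardized application
contributions = model.coef_[0] * applicant      # per-feature contribution to the log-odds

# The most negative contributions become the stated reasons for the decline.
top_reasons = sorted(zip(features, contributions), key=lambda pair: pair[1])[:2]
print(top_reasons)
```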

Nowhere does the CFPB say that you need a human to check the application. In many established lending processes, machine learning is already used, and humans aren’t involved in deciding individual applications. Human underwriters are still used in some cases, but they can be a liability, because humans are black boxes too, capable of producing inaccurate explanations, unlawful discrimination, or outright fraud.

Later in the video (50:06), Dr. Collier speculates about AI messing up someone’s life: they can’t get a bank account because the algorithm matched them to a fraudster with the same name who was active before they were born. This is so far off the mark that I don’t even know where to start. Underwriting is fundamentally about making probabilistic predictions, whatever the underwriting method, and often the majority of individuals who are declined would have been fine to approve. At the same time, the particular failure Dr. Collier imagines strikes me as incredibly unlikely.
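To put a number on the “would have been fine” point, here’s a back-of-the-envelope simulation in Python (every number in it is invented for illustration): if 90% of applicants would repay and a noisy risk score declines the bottom 20%, most of the declined applicants are people who would have repaid.

```python
# Back-of-the-envelope simulation with invented numbers: even a decent model
# declines many people who would have repaid, because repayment is the common case.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
would_repay = rng.random(n) < 0.90               # 90% of applicants would repay

# A noisy risk score, correlated with the truth but far from perfect.
score = would_repay + rng.normal(size=n)
declined = score < np.quantile(score, 0.20)      # decline the riskiest 20%

print(f"Declined applicants who would have repaid: {would_repay[declined].mean():.0%}")
```

With these made-up numbers it prints roughly 75%: about three quarters of the declines are people who would have repaid, and that is the normal, expected behavior of probabilistic underwriting rather than a malfunction.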

3. AI’s impact on jobs is uncertain

One of Dr. Collier’s main points is that AI is not accurate, and will not replace workers. However, she thinks some companies may try to replace workers with AI, and then when they find out the AI isn’t very good, they’ll hire people back as contractors or with lower pay.

I think that sounds about right, though it strikes me as overconfident. As established above, machine learning is already used in underwriting, and arguably does it better and more fairly than humans. Did this cause underwriters to lose their jobs? Well, maybe, but there’s still a lot of human expertise going into it, just not at the level of individual applications. And none of it uses ChatGPT.

In general, machine learning has an extremely difficult time surpassing experts, except where it is possible to generate large quantities of unambiguous ground truth. For instance, it is easy to verify checkmate in chess, and that makes good chess algorithms possible. Lending also has unambiguous ground truth (did the borrower pay it back or not?). ChatGPT and the new wave of “AI” do not fall into this category. ChatGPT is impressive precisely because it addresses a poorly-defined problem, and for that reason I do not expect it to surpass human experts.
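To make “unambiguous ground truth” concrete, here’s a tiny sketch (it assumes the third-party python-chess package): verifying checkmate is a cheap, mechanical check, which is part of what lets chess engines generate and label as many training positions as they like. There is no comparably cheap oracle for “is this paragraph well written?”

```python
# Checkmate is unambiguous ground truth: a cheap, mechanical check.
# Requires the third-party python-chess package (pip install chess).
import chess

# The "fool's mate" position, in which White has just been checkmated.
board = chess.Board("rnb1kbnr/pppp1ppp/8/4p3/6Pq/5P2/PPPPP2P/RNBQKBNR w KQkq - 1 3")
print(board.is_checkmate())  # True
```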

My personal experience using ChatGPT to assist with coding or writing is that it’s practically useless. There are several aphorisms about how it takes 10% of the time to get 90% of the way there, and the other 90% of the time is spent editing and debugging. I feel like ChatGPT gets me maybe 75% of the way there at best, which doesn’t save me any time at all. But I’ve heard people say it helps because English is a second language to them, and okay, I can’t begrudge that.

Some companies will try to replace workers with ChatGPT or similar, and I suspect this won’t work out for most of them. Some will learn that 75% is not good enough. Other companies may find that 75% is good enough for their needs, but that means customers can just skip the middleman and go straight to ChatGPT, or even some open-source LLM. I think that’s the real threat to jobs: the new DIY competition.

Arguably companies should be hiring more experts to differentiate themselves from the competition, but that may be overly optimistic of me.

4. AI replicates failures of human intelligence

Dr. Collier points out many issues with AI that are obviously also problems with human intelligence. In some cases, she explains the problem by first describing how humans make the same mistake. I agree, but I think it’s worth further highlighting the comparison. Many human cognitive biases don’t really have much to do with humans, they’re fundamental limitations in learning. AI and machine learning will replicate some of the same problems because AI is not magic.

One of Dr. Collier’s examples is skin conditions. Skin conditions may present differently among people of different skin color. But since doctors are mostly trained with photos of White people, this may cause problems when diagnosing the same conditions among Black people. This is a human problem. It is also a machine learning problem, because machine learning is also based on looking at photos. There are methods to de-bias diagnoses, but they’re difficult to apply, for both humans and machine learning.
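On the machine learning side, the first line of defense is unglamorous (this is a generic sketch with synthetic data, not a description of any real dermatology system): evaluate accuracy separately per subgroup, because a single aggregate accuracy number will happily hide a large gap when one group dominates the training data.

```python
# Generic sketch with synthetic data (not any real dermatology system): when one
# group dominates the training set, per-group evaluation exposes the gap that a
# single aggregate accuracy number hides.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def make_group(n, shift):
    """One feature whose relationship to the condition is shifted for each group."""
    y = rng.integers(0, 2, size=n)
    x = (y + shift + rng.normal(scale=0.5, size=n)).reshape(-1, 1)
    return x, y

# Training data: 95% from group A, 5% from group B, with different presentations.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=1.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate each group separately: accuracy is much worse for the minority group.
for name, (X, y) in [("group A", make_group(1000, 0.0)), ("group B", make_group(1000, 1.0))]:
    print(name, f"accuracy = {model.score(X, y):.2f}")
```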

Another one of her examples is cat classification. She describes how you might try to teach an algorithm to recognize even artistic representations of cats, such as stuffed cat toys or glass cat sculptures. But then the algorithm might make an error, like classifying a glass beaker as a cat (5:10). Dr. Collier presents this as a mistake only a computer would make, but I think humans can and do make similar mistakes all the time.

You can read about this problem in the Stanford Encyclopedia of Philosophy under the heading “ostensive definitions”: defining things by pointing to examples. We like to think of ourselves as actually understanding definitions conceptually, but (arguably) most of what we learn ultimately comes from ostensive definitions. Don’t get me wrong, machine learning is clearly inferior to human learning, but this particular problem isn’t caused by AI inferiority, it’s caused by the fact that AI isn’t magic. It may be worth understanding that human intelligence isn’t magic either.
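Here’s a toy sketch of that failure mode in Python (entirely invented features and examples; real image classifiers don’t work on hand-made feature lists): if the only labeled examples are the things someone happened to point at, the nearest example to a glass beaker can easily be the glass cat sculpture.

```python
# Toy illustration of learning purely from pointed-at examples: a nearest-neighbor
# "cat detector" over hand-made features. Features, examples, and labels are invented.
labeled_examples = {
    # (furry, cat_shaped, transparent, made_of_glass): is it a cat?
    "house cat":           ((1, 1, 0, 0), True),
    "stuffed cat toy":     ((1, 1, 0, 0), True),
    "glass cat sculpture": ((0, 1, 1, 1), True),
    "coffee mug":          ((0, 0, 0, 0), False),
}

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(features):
    name, (_, label) = min(labeled_examples.items(),
                           key=lambda kv: distance(kv[1][0], features))
    return name, label

# Nobody pointed at any non-cat glassware, so a glass beaker's nearest
# labeled example is the glass cat sculpture, and it gets called a cat.
print(classify((0, 0, 1, 1)))  # ('glass cat sculpture', True)
```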

5. Bad art is not immoral

One of Dr. Collier’s minor points (32:57) is that AI art sucks and we shouldn’t be forced to look at it. Earlier this year I wrote a series on AI art, but I’ll restate my relevant opinions.

I don’t care much for AI art myself. However, the same principles apply to AI art as to any other kind of art. Bad art… is fine actually.

When it comes to art, there’s a range of commerciality, from megacorp-produced art like blockbuster films, to small indie studios, to solo artists hoping to build a career, to pure hobbyists. Art you see for free on your news feed, and AI art in particular, skews towards the non-commercial end. Non-commercial art is systematically “bad” (and I say this as a hobbyist with confidence in his own art) because people don’t invest as much time and money into it, or don’t have as many eyeballs to give feedback, or they’re catering to their own niche interests rather than yours. AI art generators are tools people use to save on cost and time, so AI art is primarily hobbyist territory at the moment.

You don’t have to like the art. But “bad” art being shoved in your face is not actually a problem with AI, it’s a problem with art, and whatever algorithm you use to find it. Artists are not obligated to please you. If you’re seeing art that you don’t want to see, for free on your news feed, excuse me but you’re not paying those artists to please you.

6. AI trained on public data is not clearly wrong

I put this last because I know it’s the point where people are most likely to disagree with me. ChatGPT and AI Art generators are generally trained on “public” data, meaning stuff they scraped from the internet. This has people concerned about potential plagiarism or theft, laundered through an opaque algorithm.

I think the concern is legitimate, but my counterargument has been that human artists learn in a similar way, by looking at examples. This method can lead to accidental plagiarism, for both humans and machines (see: neither machines nor humans are magic), but that doesn’t make the method wrong. I’d like to see an empirical study of how often AI image generators produce images that closely resemble something in their training data, because I think it’s a potential issue, but not necessarily a common one.
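The study I have in mind is not hard to sketch (this is my own rough outline; the embedding below is a crude stand-in, and the threshold would need real calibration): embed each generated image and the training images in some perceptual feature space, then count how often the nearest-neighbor similarity crosses a “near-duplicate” threshold.

```python
# Rough outline of the memorization study I'd like to see. The embedding is a crude
# placeholder; a real study would use features from a pretrained vision model.
import numpy as np

def embed(image: np.ndarray, size: int = 8) -> np.ndarray:
    """Crude stand-in for a perceptual embedding: sample an 8x8 grayscale grid."""
    gray = image.mean(axis=-1) if image.ndim == 3 else image
    rows = np.linspace(0, gray.shape[0] - 1, size).astype(int)
    cols = np.linspace(0, gray.shape[1] - 1, size).astype(int)
    v = gray[np.ix_(rows, cols)].ravel().astype(float)
    return v / (np.linalg.norm(v) + 1e-9)

def near_duplicate_rate(generated, training, threshold=0.95):
    """Fraction of generated images whose nearest training image exceeds the
    threshold (cosine similarity in the embedding space)."""
    train = np.stack([embed(im) for im in training])
    return float(np.mean([np.max(train @ embed(im)) >= threshold for im in generated]))
```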

Dr. Collier uses an example from music (31:24), and I think it’s a particularly contentious example in light of recent events in the music industry. Namely, heirs of one of Marvin Gaye’s co-writers sued Ed Sheeran for producing music that was allegedly inspired by a Marvin Gaye song. Dr. Collier claims that if you ask an algorithm to write a song in the style of Prince, that would be a problem. But the fact is, that is how music is commonly practiced even by human artists; that’s how musical genres work. Commentators like Adam Neely have been saying that the copyright system in music is broken, and does not account for how music is actually practiced.

I don’t have an opinion on how to deal with copyright in music, but my point is that there are domain-specific issues, and maybe let’s not dive headlong into arguments in favor of strengthening copyright.

Concluding remarks

I first found Dr. Collier through her video on Gell-Mann Amnesia and variants. Gell-Mann amnesia is when you realize that someone has made a lot of mistakes when commenting on a topic that you are familiar with, but forget about it when they move on to comment on other topics that you are less familiar with. She also discussed the complementary problem where people are excessively nitpicky on topics which they are experts on. The relevance to this blog post has not escaped my notice.

I’m being nitpicky on several points, I admit. And I don’t think that’s a bad thing. My purpose isn’t to pass judgment on Dr. Collier, it’s just an excuse to talk about “AI” and provide my perspective.

The one point that is more than just a nitpick is about the use of machine learning in underwriting. Dr. Collier clearly misunderstood the CFPB press release. I understand critics want to deflate the hype around AI, and I generally agree with that project. However, one of the reasons AI is overhyped is that, in a sense, it’s not really new. Machine learning has been in practical use for some time already.

Comments

  1.

    One thing I notice is that when media figures weigh in on AI, they usually say something to the effect of, “I haven’t actually used any AI and don’t know anything about it, but here’s an expert I found who shares my opinion, and I’ll get them to explain things as they see it.” “On the Media”‘s Brooke Gladstone, I am looking at you.

    The most annoying bit is regarding large language models, but it also applies to the art generators: namely, someone saying “yes, but only humans can be creative.” Uh, obviously they have not actually used these tools or thought much about how creativity works, and they certainly lack any kind of theory that would argue why only humans can be creative. I had a dog who was actually pretty creative. And there are creatures like bower birds that seem to me to be more creative than a lot of pop stars. Even the mighty Noam Chomsky has waded in, apparently without having had a conversation with any LLM, saying that they are just glorified autocorrect. Yeah, is that the best science can do? Seriously?

    AGI seems like an interesting problem, probably quite solvable by superimposing a number of specialized networks in generative/adversarial chains. To do that, the issue (for now) is performance. We probably expect our AGI to be real-time or near real-time and it’ll be immersion busting if the AI wants a minute or two to “think.” And by “think” I mean create candidate outputs and run them against some adversarial filters and pick the best match. Which is what it seems to me that humans do, we’re just doing a lot of this stuff subliminally (because being deeply aware of how we think would be pointless and distracting).
