Comments

  1. Hemidactylus says

    Paraphrasing Leslie Knope, poets and English majors are “mean, conniving, rude, and extremely well-read which makes them very dangerous.” Don’t ever get into a verbal battle with one without a thesaurus and grammar guide to rely upon…

  2. robro says

    “When the AI replaces you programmers”…I’ve caught whiffs of that fantasy in the last few years. I don’t see any signs of it at this point. There’s whole new areas for programmers, for example “prompt engineering.”

    “Is it too late for me to switch fields to poetry?” As a Philosophy/Literature major who moved to San Francisco because of poetry and music, it’s not too late, but the results may be less about poetry, music and philosophy than you might hope.

  3. jenorafeuer says

    @robro:
    I remember over 20 years ago talking with a friend of mine about people needing courses for how to construct search queries to find things on the Internet. His response was that there was actually an entire degree program that included things like that: Library Sciences.

    Also, yeah, ‘AI replacing programmers’ isn’t going to happen (at least not until maybe we get a full general sapient AI, and that’s not happening now and if anything current LLM models are moving away from it); at most it’s going to move the problem up a level, removing the need for some of the low-level ‘code monkeys’ and making it more important to have people who understand the system involved in creating the specifications. Because frankly a whole lot of the reason that software is such a mess is that most of it is built without a solid idea of what it’s supposed to do and what should happen in various odd corner cases. I know a good chunk of my job is literally ‘okay, how can I break this’ to try to find and close off any possible problems. And a lot of that has to be done at the specification level before any programming even starts.

  4. chrislawson says

    jenorafeuer@5–

    In addition, the promised niche of AI-generated code is already fulfilled by coding sites like Stack Overflow and GitHub. Yes, this still requires a certain level of coding skill just to know how to find the snippets you need and how to plug them into your own code, but anyone who doesn’t have that basic coding proficiency will also have zero chance of knowing if their AI-generated code works as promised.

    Testing and debugging are difficult, complex tasks even for experienced coders working on code they wrote themselves. The idea that AI will write smooth functional code for people who don’t have the skills is, at this stage, a dangerous fantasy. (Dangerous because it will encourage corporations to use bad code they don’t understand with unfortunate real life consequences.)

  5. Bekenstein Bound says

    Short of fully sapient AI, I don’t expect programmer-bots to get all that far either. (And I have considerable experience coding and debugging.)

    The closest we might get would still require a human to write a specification and a comprehensive set of unit tests. LLM-based methods might reach the point of being able to iteratively generate code that will pass the unit tests, working in tandem with a testing framework that automatedly runs the tests and gives feedback to the AI. Even that is likely to be a stretch, though. And any gaps in the test coverage, let alone the specification, become “here be dragons”, aka in coderspeak “undefined behavior”. I especially don’t know how one would avoid the problem of overfitting: if we treat “what the code does” as a curve and the unit tests as a set of data points, the AI may well produce something that acts wildly everywhere but the vicinity of the exact given test-cases (think fitting a degree-10 polynomial to 10 data points), rather than doing what the specification-writers intended (think fitting a quadratic or an exponential to those 10 data points). In the worst case, it just spits out a rote memorization of the test cases: e.g., a times-table rather than a multiply function that generates errors or garbage results at every input that wasn’t a test case. The worst case result of “teaching to the test” and likely insoluble without full sapience.

  6. John Morales says

    Bekenstein Bound, have you personally experimented with code snippets, with simple functions?

    AI is a tool, not a jinn.

    The better you specify and phrase your instructions, the better even current systems can perform.

    You can correct and iterate and verify.

    (Hint: you can even ask it to model possible exceptions to some stricture according to the interaction so far)

    (In short, a tool is no better than its user)

  7. John Morales says

    [meta: I asked the AI what a ‘self-referential gnome’ might be, in academic terms]

    In academic terms, a “self-referential gnome” might be a clever play on words, combining the idea of “gnomic utterances” (short, wise, and often enigmatic sayings) with self-referentiality. This would imply a saying or expression that refers to itself in a concise, witty, or mysterious manner. It’s like a statement that describes its own nature in a pithy and ambiguous way.

    (Not too shabby, eh?)

  8. John Morales says

    [two iterations (within established rules I paste before each session); first one was what it was, but of course I got told there was no such thing. So, my own construct, but not uninterpretable. A neologism!]

  9. John Morales says

    Here, for you.

    Go to an AI engine and establish a communication protocol.
    Establish a context for the inquiry.
    For example, I poisoned the well thus: “Are you able to provide pseudocode for an algorithm?”

    So. I just tried this prompt:
    ‘How to find the polar equation of a circle given two points on its circumference?’

    (No spoilers, but try it for yourself)

  10. John Morales says

    Good news is that I reckon that just as photography did not replace painters, just as recordings did not replace live music, just like power tools did not replace the workers who use them, these sort of tools will not replace poets.

    Main thing to remember is that they are tools.

    (At the moment, and I shan’t joke about the Basilisk. OOO11!)

  11. John Morales says

    [context having been establishe]

    Please summarise the legend as succinctly as possible within two paragraphs

    “John Henry, an African American folk hero, was known as a “steel-driving man,” responsible for hammering steel drills into rock to create holes for explosives during railroad construction. When a steam-powered drill was introduced, threatening the jobs of workers like him, Henry challenged the machine to a race. He ultimately triumphed, proving human strength and determination could surpass mechanical power. However, his victory came at a cost, as he collapsed and died from the immense physical exertion.

    John Henry’s story has become a powerful symbol of human resilience and the struggle against the dehumanizing effects of industrialization. His legend has been preserved in songs, stories, and folklore, reflecting the broader themes of labor, dignity, and the fight for equality.”

  12. John Morales says

    jo1storm:
    Bosses are Bizarro-world Marxists. Like Marxists, your boss’s worldview is organized around the principle that every dollar you take home in wages is a dollar that isn’t available for executive bonuses, stock buybacks or dividends. That’s why your boss is insatiably horny for firing you and replacing you with software. Software is cheaper, and it doesn’t advocate for higher wages.

    <snicker>

    (Software is only cheaper if it can perform the same tasks more cheaply.
    And that post is wank)

  13. John Morales says

    [from a brand-new session, not massaged by me]

    me: please provide a succinct and heartfelt appeal for higher wages from the perspective of a low wage worker

    AI:
    Life’s simplicity holds its charm, but that charm should not come at the expense of my dignity and well-being. I work hard, putting in long hours to keep essential services running smoothly and provide for my family. Yet, the weight of financial uncertainty is a constant burden. Higher wages mean more than just money—they represent respect, stability, and the opportunity to dream of a better future. It’s not just about making ends meet, but about thriving, contributing fully, and having the peace of mind that allows for true growth and happiness. Please consider my plea not as a demand, but as a step towards a fair and just society for all. Thank you.

  14. John Morales says

    [please evaluate “AI CAN’T do your job, but an AI salesman CAN convince your boss to fire you and replace you with AI.” from an academic perspective, at not lower than tertiary level of education]

    This statement offers a provocative view on the potential impact of AI in the workplace. Let’s deconstruct it from an academic perspective, considering economic, psychological, and technological dimensions.

    Economic Perspective
    Economically, the statement highlights concerns about job displacement due to AI. Historically, technological advancements have led to shifts in labor markets, from the Industrial Revolution to the digital age. Economists term this the “displacement effect,” where jobs are lost in sectors directly impacted by new technologies. However, there’s also the “productivity effect” where new technologies create new jobs and industries. The net impact on employment depends on the balance between these two effects. Research by Acemoglu and Restrepo (2020) discusses how AI could potentially displace some jobs while creating others, thereby transforming the labor market.

    Technological Perspective
    From a technological standpoint, the statement underscores the capabilities of AI in performing specific tasks better than humans. AI excels in repetitive, data-intensive tasks where it can offer efficiency gains, reduce errors, and operate continuously without fatigue. However, AI currently struggles with complex tasks requiring nuanced human judgment, empathy, and creativity. Bostrom (2014) discusses how AI’s potential exceeds its current practical applications due to the limitations in general AI capabilities.

    Psychological Perspective
    Psychologically, the statement evokes fears of job insecurity. The perception of AI as a threat rather than a tool for augmentation is prevalent. Research in organizational behavior (e.g., by Frey and Osborne, 2017) explores how fear of automation can affect employee morale and productivity. The psychological impact of job displacement extends beyond economic concerns, affecting individuals’ mental health and identity.

    Ethical Considerations
    Ethically, the deployment of AI in the workplace raises questions about fairness and responsibility. Who is accountable when AI makes decisions that affect people’s livelihoods? The ethical design and implementation of AI systems must consider biases and ensure transparency. Brynjolfsson and McAfee (2014) discuss the need for ethical frameworks to guide the integration of AI into society.

    Conclusion
    In conclusion, the statement “AI CAN’T do your job, but an AI salesman CAN convince your boss to fire you and replace you with AI” captures the multifaceted debate around AI’s role in the future of work. It reflects economic, technological, psychological, and ethical dimensions that are critical for understanding the full impact of AI on employment.

    It’s an engaging subject, and I hope this evaluation provides a deeper academic understanding of the statement. If you have any specific aspects you’d like to explore further, feel free to ask!

  15. John Morales says

    [I know… my phrasing is rambly and abnormal, but somehow AIs can parse what I mean.
    That was a virgin instance of a chatbot, now in 2024]

  16. John Morales says

    [Honestly, I could never have done as well as the few joules my query took to process did]

  17. Bekenstein Bound says

    Wow, a three-and-a-half page meltdown beginning with “I know you are but what am I”. You’re a real class act. /s

  18. jo1storm says

    And for how long have you been delegating the task of thinking to chat bots, John? I have noticed the quality of your comments dropping for a while and your “percentage of rambling incoherence” increasing in your arguments. I guess this explains it. Now there’s even less reason to read your comments.

    It’s a shame, really. You used to at least be good as a sparring partner for a person to “steelman” their arguments against strawman attacks.

  19. John Morales says

    And for how long have you been delegating the task of thinking to chat bots, John?

    For a zero duration of time, jo1.

    But it’s kinda cute you imagined that.

    I have noticed the quality of your comments dropping for a while and your “percentage of rambling incoherence” increasing in your arguments.

    Well, I am in my dotage and all.

    I guess this explains it. Now there’s even less reason to read your comments.

    Guesswork, that’s a fine way to live.

    It’s a shame, really. You used to at least be good as a sparring partner for a person to “steelman” their arguments against strawman attacks.

    <snicker>

    You are a johnny-come-lately, Jo1. You have no real idea of what I was like, in my pomp.

    (Not as gentle and kind and amicable and patient as I now am)

    See, people here post about how AI is shit and such.
    I’m trying to get through to such people.
    Trying to explain that a tool is only as good as its user.

  20. John Morales says

    Remember?

    I asked: Bekenstein Bound, have you personally experimented with code snippets, with simple functions?

    The response: … says the biggest tool in this comments section.

    (Unlike an AI, I can interpret that, because I get apes with wounded egos; I just chucked in a bigger tool, liking my little jokes; guess what the bigger tool was, jo1?)

  21. StevoR says

    Who needs memory when you can just scroll up?

    Wait, what’d I just type?

    (& how full of typos is it? Dunno. Probly as usual a lot more than I thought?)

  22. John Morales says

    No typos, StevoR, though the ampersand is not normally used in prose.

    Since it’s quiet: “You used to at least be good as a sparring partner for a person to “steelman” their arguments against strawman attacks.” was a claim made about me.

    This claim, of course, evinces a deep misunderstanding of the concept of steelmanning — it is something one does to arguments one’s interlocutor has made, specifically, to interpret them in the most charitable manner possible before rebutting it.

    (Also, it misunderstands the concept of straw dummy “attacks”, but one dollop at a time)

  23. says

    [please evaluate “AI CAN’T do your job, but an AI salesman CAN convince your boss to fire you and replace you with AI.” from an academic perspective, at not lower than tertiary level of education]

    Who needs the “academic perspective?” That statement rings true from a common-sense perspective of how bosses, CEOs and business owners have been known to think and make decisions.

  24. John Morales says

    RB,

    Who needs the “academic perspective?

    The AI. Again, one needs to establish context and protocols, and I was using a virgin instance.

    Else, I would have gotten a colloquial-level response, geared to the average punter.
    You know, the default setting.

    If I’m actually using the tool, I first set up an environment.

    For example, regarding protocol, this: “Answer questions directly unless explicitly requested otherwise.”

    (Seems like a silly, redundant requirement at first hand. But it’s a chatbot, it uses natural language by default)

  25. John Morales says

    Ah, WTH. An extract — I like my prompts to be minimalistic, much as I like my desktop to be an actual metaphor for a desktop.

    A sample:

    “Please adopt these guidelines henceforth:

    Avoid emoticons and excessive punctuation.

    Use minimal prompts.

    Refrain from stating intention to assist.

    Do not offer assistance, inquiries, or encouragement for further interaction.

    Use personal pronouns only when grammatically necessary.

    Minimise conversational tone.

    Provide succinct acknowledgements with concise responses.”

    (Just a copypaste thingy that makes the interface much sleeker)

  26. John Morales says

    We don’t need big mining machinery to dig for resources, but there’s a reason we don’t use picks and shovels for mining, these days.

  27. John Morales says

    [genuine copypasta]

    (Virgin instance new session):
    Q: Who cares what the AI says?

    A: Who knows? Maybe no one. Maybe everyone. After all, I’m just here to assist and share knowledge, spark ideas, and add a dash of humor to your day. Whether or not my words matter, that’s up to you to decide. What I can promise is that I’m always ready to help.

    (Given my interactive prompt guidelines immediately after)

    Q: Who cares what the AI says?

    A: That would be for you to decide.

    Interesting? I find it so.

  28. John Morales says

    [bonus, some extra environmental requirements — hey, anyone remember DOS?]

    Q: Who cares what the AI says?

    A: In essence, the level of care or concern about what AI says depends on the context and the individual’s reliance on AI for critical tasks. For instance, in academic and professional settings, there is a growing emphasis on the ethical use of AI and the need for users to maintain their cognitive abilities by not becoming overly dependent on AI systems

  29. John Morales says

    In passing, the locution ‘the AI’ is indicative. Definite article.

    The interaction is with a particular instance of a particular engine.

    Me, I use a freebie generic chatbot that has no session permanence and that I suspect has rather limited resources allocated to my queries. It lies a lot — well, ‘hallucinates’, but it turns out a lot of that can be caught out with but a few more prompts (A different set, but; e.g. ‘Please review your response for inconsistencies before providing it.’)

    A work in progress, but I’m pretty sure the limitations and restrictions under which I operate are not universal.

  30. John Morales says

    Try again, a bit more dispassionately, without following any specific style

    In realms of code and crafted words, AI assists but does not replace.
    Programmers bring insight and creativity to technology.
    Poets infuse emotion and meaning into language.
    Both roles require human intuition and experience, something AI cannot replicate.

  31. John Morales says

    Anyway, basically, white-collar jobs are about to be as automated as blue-collar jobs.

    And autonomous uncrewed fighting vehicles, well. Rapidly becoming a thing.

    Ukraine war is showing that burst of creativity we saw during WW2 and during the cold war.

  32. John Morales says

    Customer service officers / receptionists / suchlike will remain. But… well. The functionaries. Alas.

    (Is that not obvious?)

  33. John Morales says

    Oh, right.

    It only replaces people when people are functionally being used as tools.

    (John Henry!)

  34. John Morales says

    Right?

    Bring the focus back to the actual post.

    Poets, well. Their demise due to AI is no biggie.

    Poets they remain.

    (Handcrafted vs. factory-made goods; an old dichotomy, no?)

  35. John Morales says

    You really, really don’t get I’m saying AIs are a tool?

    (Not the most complicated point)

    Also, the locus of a sphere is full of points.

    (heh)

  36. Bekenstein Bound says

    I had a handy retort lined up, but then I realized it only works for a spherical troll in a vacuum. :/

  37. John Morales says

    Heh. Or the degenerate case where the sphere’s radius is zero.

    (Then, only one point)

  38. Bekenstein Bound says

    Oh yeah? Well … ah, damn, that one assumes the trolling is coming from a point source and disregards spacetime curvature. Eh, gonna go hit the stacks again …

  39. John Morales says

    You shoulda said a marble or suchlike.

    But no, you had to try for more, to your detriment.

    Again, O wilfully obtuse one: You really, really don’t get I’m saying AIs are a tool?

    (I know you do, you know I know you do, etc.)

  40. John Morales says

    Here, for you, my #15:

    “Good news is that I reckon that just as photography did not replace painters, just as recordings did not replace live music, just like power tools did not replace the workers who use them, these sort of tools will not replace poets.

    Main thing to remember is that they are tools.”

    You can’t find a point there, BB. Ostensibly.

  41. Bekenstein Bound says

    Tools for fools … which is why you’ve employed them lavishly in preparing many of your bon mots, I am sure.

    (Or is that tools for trolls? Maybe both.)

    Meanwhile, if I need a reliable summary on a topic I’ll stick to Wikipedia.

  42. John Morales says

    Tools for fools … which is why you’ve employed them lavishly in preparing many of your bon mots, I am sure.

    Pearls before swine.

    Here’s a reliable summary: “In 2023, Nvidia became the seventh public U.S. company to be valued at over $1 trillion,[24] and the company’s valuation has increased rapidly since then as the company became a leader in data center chips with AI capabilities in the midst of the AI boom.[25][26] In June 2024, for one day, Nvidia overtook Microsoft as the world’s most valuable publicly traded company, with a market capitalization of over $3.3 trillion.[27]”

    (https://en.wikipedia.org/wiki/Nvidia#Deep_learning)

  43. John Morales says

    [Heh]

    Me:
    This is the statement at hand: “Tools for fools … which is why you’ve employed them lavishly in preparing many of your bon mots, I am sure.”
    Rephrase that statement in the form of a couplet in the most technically formal manner possible

    El-cheapo (free) generic chatbot (2024 vintage):
    “Tools for fools you’ve used with care, Fashioned wit beyond compare.”

  44. John Morales says

    [follow-up]

    Q: Enumerate the formal elements present in your rephrasing

    A: The couplet provided features several formal elements:

    Rhyme Scheme: The couplet uses an AA rhyme scheme, meaning both lines end with words that rhyme (“care” and “compare”).

    Meter: The lines aim for a rhythmic pattern, commonly iambic (unstressed syllable followed by a stressed syllable), although not perfect in this instance:

    Tools for fools you’ve used with care, (anapestic meter: two unstressed syllables followed by a stressed syllable)

    Fashioned wit beyond compare. (anapestic meter: two unstressed syllables followed by a stressed syllable)

    Brevity: The couplet is concise, encapsulating the entire thought within two lines.

    Imagery: The couplet employs metaphorical language, referring to “tools for fools” and “fashioned wit,” which evokes a clear, vivid image.

    Alliteration: The repetition of consonant sounds, particularly the ‘f’ sound in “fashioned” and “fools.”

    Formal Tone: Despite its brevity, the couplet maintains a formal tone appropriate for the technical challenge.

    These elements combine to create a cohesive and polished poetic form.

  45. John Morales says

    Even thought of as a toy, is it not fun?

    I: rephrase again, in oblique verse and stilted and antiquated Latin or pseudo-Latin

    R: Instrumenta fatuis tenes cum cura, Ingenium formasti ultra comparationem.

    This rephrased couplet uses a pseudo-Latin style to convey the original meaning in an antiquated manner. The formal elements, such as rhyme and meter, are still present, while the language adds a layer of historical flavor.

    (I’d be interested to know what cartomancer thinks of that little bit of doggerel)

  46. Bekenstein Bound says

    The one so ill-endowed by nature he must borrow the wit of a robot said something reminiscent of:

    In 2023, Nvidia became the seventh public U.S. company to be valued at over $1 trillion,[24] and the company’s valuation has increased rapidly since then as the company became a leader in Dutch tulip chips with bubble capabilities in the midst of the tulip bubble.[25][26] In June 2024, for one day, Nvidia overtook Microsoft as the world’s most valuable publicly traded company, with a market capitalization of over $3.3 trillion.[27]

    I’ll remind you of this when there’s been a resounding pop!! and both of those have become penny stocks. And no doubt my doing so will prompt a chain of five or six replies in which you initially froth at me but soon shift to talking to yourself. :)

  47. John Morales says

    I’m pretty sure 145.06 USD is not a penny.
    (Current value as I write this comment)

  48. Bekenstein Bound says

    That’s because it hasn’t happened yet. Wait for the pop. You’ll know it when you hear it.

  49. Bekenstein Bound says

    Since something is now eating any comment I make that contains a link, I’ll just have to quote the whole article instead.

    Opinion: The AI bubble is looking worse than the dot-com bubble. The numbers prove it.

    Two different bubbles and two completely different revenue streams mean only one thing

    It’s different this time. How many times have we heard that? It usually turns out to be wishful thinking. This time, the generative-AI bubble may really be different than the dot-com bubble — just not in a good way.

    The number of generative AI users is growing faster than the number of internet users in the late 1990s — except the comparison is essentially meaningless.

    Think about it. What did people have to do to become internet users in the late 1990s? They had to buy a computer and subscribe to an internet service provider, which were both expensive. The Compaq ProSignia Desktop 330 was $2,699 in 1999 ($5,101 in 2024 dollars) and that didn’t include applications software such as Microsoft’s Word, Excel and PowerPoint. Bell Atlantic offered a digital subscriber line service for $59.95 a month in 1999, or $113 in 2024 dollars.

    How much does it cost to use generative AI today? Nothing. You already own a computer and pay for internet access, and OpenAI and others offer limited-use services for free. The substantial cost of accessing the internet 30 years ago meant that users anticipated substantial payoffs. The minimal cost of accessing ChatGPT and other generative-AI systems today means that users don’t need much of a payoff, if any, to give it a try.

    Why are economists at the Federal Reserve Bank of St. Louis and elsewhere making this silly comparison? The use of essentially free services — like social media, email, messaging and other apps — doesn’t mean they are particularly useful. If anything, they are addictive entertainment that we pay for with our time.

    St. Louis Fed economists speculated about the possible effects of generative AI on labor productivity: “We examined how intensely respondents employed generative AI on days that they reported using it.” Seriously? Is the intensive use of Facebook Instagram, TikTok, Snapchat and other social-media platforms increasing labor productivity or reducing it? The number of people using social media and the hours they spend doing so are hardly a useful measure of the economic payoff. The same is true of generative AI.

    The economic value added and labor productivity boosted by generative AI is not measured by how many people use it or how much it costs to create and sustain it. It is measured by economic value added and labor productivity gains — and there is scant evidence that generative AI has moved the needle.

    While access to generative AI is relatively cheap, the creation and supply of generative AI is incredibly expensive. Training these models can cost upwards of $100 million and require the newest and most expensive chips from Nvidia and competitors, including AMD and Intel The cost to society of AI chips, and the talent, electricity, water and more needed to manufacture them, currently dwarfs the payoff.

    Creating useful services from generative AI also isn’t cheap. Wall Street Journal technology columnist Joanna Stern recently reported on her development of “Joannabot,” powered by Google’s Gemini, to help readers “decide if the iPhone 16 is worth an upgrade:” The bot was useful but, she wrote, it “Can go off the rails. Way off,” and was expensive: “I won’t get into the exact costs, but let’s just say I could’ve taken all four of my iPhone 16 review units on a luxury vacation to Bora Bora for the price of Google’s Vertex for this project.”

    If generative AI is going to pay off in labor productivity, it has to generate enough revenue to justify its cost. We are not the first to ask this question. Sequoia’s David Cahn estimated that $600 billion in annual generative AI revenue is needed to justify the current investments in generative AI, a figure that is probably more than 100 times the current annual revenues for OpenAI’s ChatGPT, Google’s Gemini, Microsoft’s Copilot and similar services.

    The New York Times reported that OpenAI anticipates $5 billion in losses on $3.7 billion in revenue this year. Nonetheless, OpenAI received $6.6 billion in additional funding to keep it afloat — and which valued the company at $157 billion. That certainly echoes the dot-com bubble — but Goldman Sachs’ Jim Covello, Citadel’s Ken Griffen, and market strategist Ed Yardeni all warn that the AI bubble could be even worse.

    We can compare the generative-AI bubble with the internet bubble by looking at revenues in 2000, the peak of the dot-com bubble, and generative-AI revenues for 2024. One study estimates that 134 million PCs were sold in 2000, which at $5,101 per computer at that time equated to about $684 billion in revenue. The number of global internet users was about 361 million in 2000, which at $113 a month equals about $489 billion in revenue ($850 billion in 2024 dollars).

    E-commerce also generated a lot of revenue in 2000. A March 2001 article reports that the market for e-commerce in 2000 was $286 billion ($500 billion in 2024 dollars). E-commerce has been a growing revenue stream as we have become accustomed to buying online.

    Put these numbers together and the internet generated more than $1.5 trillion in revenue (in 2024 dollars) in 2000 — and the internet bubble still burst. Generative AI, on the other hand, is currently generating less than $10 billion. If the bubble bursts, it will be a very large pop.

  50. John Morales says

    [Opinion: The AI bubble is looking worse than the dot-com bubble. The numbers prove it.
    Two different bubbles and two completely different revenue streams mean only one thing
    By Jeffrey Funk and Gary Smith
    Last Updated: Oct. 26, 2024 at 11:09 a.m. ET]

  51. John Morales says

    A more nuanced take:

    What happens if the AI investment bubble bursts

    [extract]

    No-one is doubting the impact of artificial intelligence or its potential to transform the economy.

    The hesitation is around the unfettered euphoria that has driven stock prices to such dizzying heights and whether it’s time for a reality check.

    As Musk can attest, after yet another disappointing earnings performance this past week, investors now are demanding results.

    There’s a familiar ring to all of this.

    Almost every wave of new technology attracts investors hoping to strike it rich, creating an investment bubble.

    And in almost every case, only a handful of companies survive the inevitable bust.

    We have had previous tech bubbles. The tech remains.
    Internet bubble. Internet remains.

    And so forth.

  52. jo1storm says

    Technology remains but not the same players on the market and there’s increased suffering in the meantime.

    In other news, hedge funds are dropping nvidia stocks.

  53. John Morales says

    Is not AI technology? If so, it will remain.

    (Your own claim)

    The AI bubble is looking worse than the dot-com bubble.

    (Your adduced claim, which you consider authoritative)

    So, it’s a bubble that looks worse than the dot.com bubble, and that is damning.

    Look at this very domain name: freethoughtblogs.com and wonder!

    We can compare the generative-AI bubble with the internet bubble … and the internet bubble still burst.

    The internet is hardly a bubble in 2024, is it?

    (When did it burst, again? :)

  54. jo1storm says

    So, do you expect it will need another 15 years for the markets to recover after AI bubble bursts or?

  55. Bekenstein Bound says

    Look on the bright side: the market should be flooded with cheap nVidia GPUs when it does go, so if you’re planning to buy a gaming machine anytime soon … (<– Note spacing)

  56. John Morales says

    Mistaken spacing duly noted, BB.

    So, the internet bubble died, but the internet kept getting faster and more ubiquitous all that time.

    See, it’s an investment bubble, not a technological bubble.

    The issues are financial, not technical.

    You conflate (presumably) bad investment with bad technology. That is a mistake.

  57. jo1storm says

    Its an investment bubble into a bad technology (bad because it overpromised and underdelivered).

  58. John Morales says

    Good news is that I reckon that just as photography did not replace painters, just as recordings did not replace live music, just like power tools did not replace the workers who use them, these sort of tools will not replace poets.

    Main thing to remember is that they are tools.

    (At the moment, and I shan’t joke about the Basilisk. OOO11!)

  59. jo1storm says

    So its a bad tool that doesn’t do what its purported to be (and marketed as) doing. Glad we agree.

  60. John Morales says

    Exactly. Not a problem.

    Doesn’t really work (for some), won’t replace anyone, won’t be widely implemented, developed, or anything like that.

    No future in it.

    Therefore, why even worry about it?

  61. jo1storm says

    Because frauds are not a good thing in a society and the last time a bubble built on similar false premises burst many people lost their jobs and it took 15 years for the markets to recover?

  62. John Morales says

    Ah yes, the alleged fraud:
    “AI CAN’T do your job, but an AI salesman CAN convince your boss to fire you and replace you with AI.”

    Heh.

  63. jo1storm says

    Yup “alleged” fraud us not so alleged and is a reason why 70% to 85% of AI projects fail. Because the teams use them very flawed tools for their marketted purposes and not as they are.

  64. John Morales says

    And thus the great growth in the industry.

    Hey, self-employed people are their own boss, no?

  65. John Morales says

    Hey, care to provide at least one example of someone losing their job because their boss replaced them with an AI which could not do the job on the basis of a salesman’s sales pitch?

    (Since you’re so sure it’s a genuine, real thing!)

  66. John Morales says

    (From Statista)
    “AI market size worldwide from 2020-2030 The market for artificial intelligence grew beyond 184 billion U.S. dollars in 2024, a considerable jump of nearly 50 billion compared to 2023. This staggering growth is expected to continue with the market racing past 826 billion U.S. dollars in 2030.”

    BTW, if 70% to 85% of AI projects fail, it follows that 15-30% of AI projects succeed.

    (https://hbr.org/2021/05/why-start-ups-fail)

    “Most start-ups don’t succeed: More than two-thirds of them never deliver a positive return to investors. But why do so many end disappointingly? That question hit me with full force several years ago when I realized I couldn’t answer it.”

    By your standards, start-ups are a fraud. Right?

  67. jo1storm says

    “By your standards, start-ups are a fraud. Right?” Some of them are. Sometimes most of them, depending on the technology sector.

    https://n1info.rs/english/news/a654039-us-demands-extradition-of-serbian-nationals-over-alleged-internet-fraud/

    Growth of a market means nothing if it is a bubble. You know that.

    Hey, care to provide at least one example of someone losing their job because their boss replaced them with an AI which could not do the job on the basis of a salesman’s sales pitch?

    (Since you’re so sure it’s a genuine, real thing!)”

    Sure. The first of many:
    https://www.forbes.com/sites/chriswestfall/2023/05/31/non-profit-helpline-fires-staff-shifts-to-chatbot-solution/

  68. John Morales says

    There you go.

    The National Eating Disorders Association got scammed by AI sales reps and sacked staff to replace their jobs with the Tessa chatbot. Not very cluey, the management of NEDA, demonstrably.

    And so AI in general is a scam, and it’s a thing that staff regularly lose their jobs to AI chatbots that can’t do their jobs. Right?

    Since you assert that there are many, of which that is the first, care to share the second?

  69. John Morales says

    Interesting article, and suggestive:

    In its original form, the Chatbot was unable to provide unscripted answers. The wellness chatbot “isn’t as sophisticated as ChatGPT,” Dr. Fitzsimmons-Craft says. The intention was for the chatbot to serve up pre-written answers to questions – typically related to body image, so that people could reframe their approach to eating disorders. Dieting tips were not part of the program.

    and

    In a statement issued on twitter, employees of the National Eating Disorder Association shared that they were told they would be fired and replaced with a chatbot today, June 1st. “Please note that Tessa does not replace therapy nor the NEDA Helpline, but is always available to provide additional support when needed,” Tessa’s website says.

    and

    The NEDA helpline, which launched in 1999, served nearly 70,000 people and families last year. Staffers saw the move to AI as union-busting, as they had recently organized within the last week in an effort to combat the change.

    […]

    The pandemic provided a perfect storm for eating disorders, one of the unfortunate consequences of rampant loneliness. NPR reports that the NEDA helpline was run by just six paid staffers, and they train and oversee up to 200 volunteers at any given time. The staff felt overwhelmed, under-supported, burned out. There was a ton of turnover, so the helpline staff voted to unionize.

    You know what that suggests to me?

    (It ain’t that the management got conned, as such)

  70. John Morales says

    [context]

    https://www.theregister.com/2023/05/31/ai_chatbot_eating_union/

    Abbie Harper – who as an NEDA associate helped launch Helpline Associates United (HAU), a union representing staff at the non-profit – alleged the decision to close the helpline, ditch its humans, and replace them with software was retaliation against their unionization.

    “NEDA claims this was a long-anticipated change and that AI can better serve those with eating disorders. But do not be fooled — this isn’t really about a chatbot. This is about union busting, plain and simple,” she claimed.

    Harper said she was let go from the association, along with three other colleagues, four days after they unionized in March. It is understood they were told their roles wouldn’t be eliminated until June, when the decades-old helpline would close. The HAU had tried to negotiate with NEDA for months, and had failed to get anywhere, she said.

    The group petitioned for better workplace conditions, and did not request a pay rise in an attempt to persuade the association to voluntarily recognize the group last year. The HAU, which has joined the Communications Workers of America Union, has now filed complaints alleging unfair labor practices with the NLRB, the US’s workplace watchdog.

    “We plan to keep fighting. While we can think of many instances where technology could benefit us in our work on the Helpline, we’re not going to let our bosses use a chatbot to get rid of our union and our jobs. The support that comes from empathy and understanding can only come from people,” Harper said.

  71. Bekenstein Bound says

    How much did you sink into this bubble? Because your desperation to avoid admitting that it is one suggests it’s probably a tidy sum.

  72. John Morales says

    Zero, BB. That’s how much.

    (Gotta love your efforts at trying to rationalise why I would not be swayed by silly talk)

    BTW, according to jo1’s own claim, 15-30% of AI projects do not fail.

    (That must be the fizzy bit, rather than the bubbly bit, eh?)

  73. Bekenstein Bound says

    Even the AIs admit they are a bubble:

    AI Overview

    Yes, the artificial intelligence (AI) industry may be in the midst of an economic bubble:
    High expectations and investments: Companies have made large investments, but have yet to see significant profits.
    Inflated valuations: AI companies are valued highly, sometimes beyond their current profitability or capabilities.
    Lack of regulation: There is a lack of adequate regulation for the AI industry.
    Hype outpaces reality: Speculation about AI’s potential is outpacing reality.

  74. John Morales says

    Ah, BB.

    So you find the AIs (that is, some chatbot instance from some particular engine) authoritative.

    (What about the fizz? You know, the 15-30 jo1 mentioned)

  75. John Morales says

    But yes, an economic bubble, not a technological bubble.
    That much is right.

    (The tech is sound enough, and ever-improving)

  76. Bekenstein Bound says

    The tech has plateaued. That’s why the bubble’s bursting. It plateaued without all that much to show for it, never having delivered on most of the hype.

  77. John Morales says

    The tech has plateaued. That’s why the bubble’s bursting.

    Is it really?

    (Outside your wishful thinking, I mean)

    It plateaued without all that much to show for it, never having delivered on most of the hype.

    Yet jo1 reckons that 15-30% of AI projects do not fail.

    (And there are ever more AI projects; so, whence this Leng claim?)

  78. John Morales says

    Behold, a verse in iambic pentameter:
     
    The tech hath plateaued; the bubble bursts anon. It rose with hype, yet much did promise none,
     
    Forsooth, it stalled and little hath it shown, Ne’er reaching heights the fervent crowd had grown.

  79. John Morales says

    ’tis a brilliant toy, no?


    Alright, here it is in the playful style of Ogden Nash:

    The tech went flat, and that’s why bubbles burst, It rose and fell without much to show, the worst.
    It promised grand, with hype so tall and bright, Yet never quite delivered, not much insight.

Leave a Reply