So…you wanna be a science communicator?


Good for you, but I have to warn you that there are some discouraging developments. There is a peculiar segment of society that will want to outlaw you.

Abortion is on the ballot in South Dakota. Insulin prices were a key issue in the June debate between Biden and Trump. The U.S. Surgeon General declared gun violence a public health crisis, and the Florida governor called the declaration a pretext to “violate the Second Amendment.”

In an intense presidential election year, the issue of anti-science harassment is likely to worsen. Universities must act now to mitigate the harm of online harassment.

We can’t just wish away the harsh political divisions shaping anti-science harassment. Columbia University’s Silencing Science Tracker has logged five government efforts to restrict science research so far this year, including the Arizona State Senate passing a bill that would prohibit the use of public funds to address climate change and allow state residents to file lawsuits to enforce the prohibition. The potential chaos and chilling effect of such a bill, even if it does not become law, cannot be overstated. And it is just one piece of a larger landscape of anti-science legislation impacting reproductive health, antiracism efforts, gender affirming healthcare, climate science and vaccine development.

It’s a bit annoying that the article talks about these dire threats to science-based policy, but doesn’t mention the word “Republican” once. It’s not that the Democrats are immune (anyone remember William Proxmire, Democratic senator from Wisconsin?), but that right now Republicans are pushing an ideological fantasy and they don’t like reality-based people promoting science.

One other bit of information here is that science communication does not pay well, so you’ll need a more solid financial base: a university position, or a regular column in a magazine or newspaper, anything to get you through dry spells. You can try freelancing, but then you’re vulnerable to any attack, and hey, did you know that in America health insurance is tied to your employment? I’ve got the university position, which is nice, but that’s a job, and your employers expect you to work, and it’s definitely not a 40-hour-a-week sort of thing.

It doesn’t matter, though, because those employers are salivating at the possibility of replacing you with AI.

But AI-generated articles are already being written, and their latest appearance in the media signals a worrying development. Last week, it was revealed that staff and contributors to Cosmos claim they weren’t consulted about the rollout of explainer articles billed as having been written by generative artificial intelligence. The articles cover topics like “what is a black hole?” and “what are carbon sinks?” At least one of them contained inaccuracies. The explainers were created by OpenAI’s GPT-4 and then fact-checked against Cosmos’s 15,000-article archive.

Full details of the publication’s use of AI were published by the ABC on August 8. In that article, CSIRO Publishing, an independent arm of CSIRO and the current publisher of Cosmos, stated the AI-generated articles were an “experimental project” to assess the “possible usefulness (and risks)” of using a model like GPT-4 to “assist our science communication professionals to produce draft science explainer articles”. Two former editors said that editorial staff at Cosmos were not told about the proposed custom AI service. It comes just four months after Cosmos made five of its eight staff redundant.

So the publisher slashed its staff, then started exploring the idea of having ChatGPT produce content for their popular science magazine, Cosmos. They got caught and are now back-pedaling. You know they’ll try again. And again. And again. Universities would love to replace their professors with AI, too, but they aren’t even close to that capability yet, and they have a different solution: replace professors with cheap, overworked adjuncts. They won’t have the time to do science outreach to the general public.

Also, all of this is going on as public trust in AI is falling.

Public trust in AI is shifting in a direction contrary to what companies like Google, OpenAI and Microsoft are hoping for, as suggested by a recent survey based on Edelman data.

The study suggests that trust in companies building and selling AI tools dropped to 53%, compared to 61% five years ago.

While the decline wasn’t as severe in less developed countries, in the U.S. it was even more pronounced, falling from 50% to just 35%.

That’s a terrible article, by the way. It’s by an AI promoter who is shocked that not everyone loves AI, and doesn’t understand why. After all,

We are told that AI will cure disease, clean up the damage we’re doing to the environment, help us explore space and create a fairer society.

Has he considered the possibility that AI is doing none of those things, that the jobs are still falling on people’s shoulders?

Maybe he needs an AI to write a science explainer for him.

Shut up, AI

Comments

  1. kenbakermn says

    They claim that AI will somehow help clean up the environment. But those AI data centers, of which there are many, consume gigantic resources to produce laughably unreliable results.

  2. raven says

    Why should I trust AI search results?

    They are right a lot of the time.
    They are also often wrong.

    Anything important from Google AI search results, I end up checking anyway.
    So why not just omit that step?

  3. robro says

    We are told that AI will cure disease, clean up the damage we’re doing to the environment, help us explore space and create a fairer society.

    Most of that is ridiculous on the face of it, at least for the foreseeable future. AI in its various forms, including LLMs and generative AI, may help with some things, like exploring space or researching cures for diseases, but it’s not going to do anything itself. Based on my limited experience, they take a lot of work to build and are (so far) only useful as “recommenders.” If you use autocorrect while texting, you use AI recommenders. But you use them and monitor them because they aren’t self-sufficient and they are far from perfect.

    On top of all that there are some problems with the business model. For example, there are already plagiarism claims over gen-AI produced papers incorporating verbatim paragraphs from published, copyrighted works. “Fair use” has limits. The lawyers will love it…and they might use AI to research their cases.

  4. says

    They claim that AI will somehow help clean up the environment. But those AI data centers, of which there are many, consume gigantic resources to produce laughably unreliable results.

    That’s the part that gets me every time. It’s a lot of work and resources for dubious output. One thing I’d like to see more of is effort dedicated to getting the most out of every operation or every joule powering the computers, rather than simply piling on more processing power.

    We can do some neat tricks with big data, but that’s not going to change the world for the better, unless we can mitigate the need for “big”.

  5. Walter Solomon says

    If we haven’t achieved any of these very nice-sounding goals after 50+ years of using microprocessors, why do these whiz kids believe AI will be such a game-changer?

    That said, I am a fan of ChatGPT. It’s fun to use. I have it give me creative writing prompts and then make suggestions on how to improve my writing. It’s fun but it’s not changing the world.

  6. says

    The reason the CSIRO made several of its Cosmos writers redundant is that successive anti-science conservative governments have slashed its operating budget. The CSIRO has made an enormous contribution to Australia’s economy across many fields. Some important technological breakthroughs which opened up new markets were allowed to wither because the economic rationalists didn’t believe in government support and protection for fledgling industries. One conservative Prime Monster went so far as to close down a climate change research section, which now only continues through public contributions. There is an all-out assault on science and a progressive dumbing down of the population by conservative politicians and the media. Much the same as what is happening in America.

  7. outis says

    Baaaah, this AI obsession is really getting under my skin. Apart from some impressive, specialized applications in the sciences I see precious little that is “revolutionary”, apart from a lot of grifting and general stupidity.
    The sooner everyone realizes it’s another hype cycle the better. How many “revolutions” were we promised, and how many actually materialized? Grrrr – better I stop here.

  8. curbyrdogma says

    Beyond the ethical issues of AI (deepfakes; using it to replace actual humans, etc.) is the fact that sometimes the “I” stands for “inaccurate”. Stock photo sites (including Adobe) are already being overrun with AI-generated images, which are overtaking search engine results. Sometimes the inaccuracies are pretty obvious to the layman, such as the mangled hands, multiple nostrils, etc., but if there are no experts who can fact-check the articles or images, it will inevitably contribute to the continued dumbing-down of the population. …Even for relatively simple subjects such as animal identification. Examples:

    https://pngtree.com/free-backgrounds-photos/capybara-yawn-teeth-pictures
    https://cdn.animalmatchup.com/images/matchups/highlight/sloth-bear-vs-sun-bear/karate/sloth-bear-animalmatchup.com.webp
    https://www.freepik.com/premium-ai-image/baby-sloth-is-held-hand_46642104.htm
    https://openart.ai/community/OAVFlHZ9GCfjfzVSr477

    The above may or may not have been done just for fun, but there’s no indication of the author’s intent. One can only imagine the goofs for more technical media such as scientific illustration. I would suggest developing a resource site that is actually monitored by real scientists and experts, kind of like an “Angie’s List” for science related material.

  9. Jack Krebs says

    A Facebook friend of mine recently wrote this poem (without AI!) that I think is very good.

    AI and the Arts
    I challenge you, Mr. Artificial Intelligence,
    will your final results,
    which may appear to be the same as mine,
    show the tear-stained paper
    with noticeable reworked marks
    of an inspired poem?

    Will an ignited spark of a memory
    giving an author the ability
    to transfer ideas to words
    pouring from his head through his hand
    to write and retell a story
    be honored?

    Will your masterpiece express
    the calmed serenity of an artist,
    producing line upon line
    on a caressed canvas,
    reveal the enthusiasm
    of inflaming splashed color?

    Will your final piece communicate
    the quiver of excitement
    of being part of a musical group,
    transposing synchronized black notes
    to melodious sounds,
    evoke a common emotion
    that magically waves across
    multitudes of listeners?

    Will you be able to
    accurately imitate an actor,
    who uses his experienced emotional past
    to project himself into another’s being,
    produce a realistic character
    who is relatable to an audience?

    Will your images
    generate the muscular pain,
    earned sweat, and heaving lungs
    from a completed dance
    make the viewer’s mind
    emulate and spin?

    There is an inspired process
    and a painful madness
    to engender the majesty of the arts
    and a sense of accomplished joy.
    The sweat, the tears, the mistakes,
    the practice, the perseverance,
    the connectivity of souls,
    these are the things
    that make creative arts real.

    You, Mr. Artificial Intelligence,
    are only an empty imitation,
    an imposter, an impersonator,
    a robber of my inspiration and brain.
    You may deem your final results
    to be faster and superior in quality,
    but, I wonder . . . ?

    By Cheryl Lynn Delcour
    August 12, 2024
