Over at Pharyngula, special jackbooted operative raven floated a dangerous idea: [pha]
I suppose Tucker Carlson wants one of the M&Ms to wave a Swastika flag around or carry an AR 15 rifle or something. The right wingnut patriot M&M.
If everything you read on the internet was written by AIs, would you care?
I’ve been struggling with a problem: “what happens if someone tells an AI to ‘code a better version of yourself’ and – whoosh – the singularity happens?”
One of the kids in the wargaming group went off on vacation in the midwest and came back with a new game: Dungeons and Dragons.
This is going to be interesting. No, I lied, it’s going to be entirely predictable and fairly ho-hum. But I’ll be interested.
A typical AI model like GPT-3 now contains beelyuns (billions) of decision-points – it’s a huge probability map of all of the potential answers that have generally been given before.
I’m fascinated by how the AI models get better extremely fast once they are exposed to a few thousand users frantically working at them.
By now you have probably heard of Dall-E and Midjourney, etc.
When I read Robert Coram’s Boyd [wc] I was fascinated. Here was a fellow who appears to have been two things: 1- a strategic genius and 2- a really fast thinker. Coram (and others, including Chuck Spinney) has long held Boyd forward as an innovator who re-invented the art of war, but I respectfully must disagree.
Doubtless, large swathes of this story are covered by blankets of classification; it’s one of the government’s best tools for hiding incompetence. Because – not to mince words – the only way this thing could have happened is massive incompetence.
