Marshall McLuhan said something along the lines of: people prefer to hear bad news because there’s always bad news and they know how to react to it. Say “that’s too bad” and move on. But when people hear good news they get uncomfortable, because good news means change, and people don’t know how to feel about change.
The last six months have seen massive progress in the availability of consumer AI products. We now have access to powerful models that can generate images (Stable Diffusion), code (GitHub Copilot), text and code (ChatGPT), and music (Amper Music).
Everyone is worried about technology taking their job. But in my experience, any time saved from automation is immediately filled with more, sometimes higher-value, tasks. Otherwise we’d still be hunting and gathering. The last few years have seen an explosion in how easy it is to deploy machine learning models, and you can already automate tasks that require some light decision making, as in the sketch below.
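To make “light decision making” concrete, here’s a minimal sketch of the kind of automation I mean: routing inbound tickets with an off-the-shelf zero-shot classifier from Hugging Face. The ticket text and queue labels are made up for illustration, and any hosted or local model would do just as well:

```python
# Hypothetical example: auto-triage support tickets with a zero-shot classifier.
# pip install transformers torch
from transformers import pipeline

classifier = pipeline("zero-shot-classification")  # downloads a default model on first run

ticket = "My invoice from last month looks wrong, can someone take a look?"  # made-up ticket
labels = ["billing", "technical support", "sales"]                           # made-up queues

result = classifier(ticket, candidate_labels=labels)
print(result["labels"][0])  # route the ticket to the highest-scoring queue, e.g. "billing"
```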
One of my pet economic theories is that decisions in the economy aren’t driven to maximize profits but rather to maximize ego. This is especially true at big organizations, where folks often scheme and maneuver to increase their influence by landing successful projects and growing their teams. As long as growing headcount is seen as the way to be conventionally successful, I doubt we’ll see drastic AI-related workforce reductions (or offshoring for remote workers, while we’re here). The tools also aren’t quite there yet, and I can’t think of a middle manager type out there who would let ChatGPT go wild on their white collar job.

The Discourse
Conversation, rule a nation, I can tell, I’d never write my wrongs lest I write them down for real. On one hand you’ve got the unimpressed “it’s just glorified autocomplete” losers. On the other you’ve got the doomers saying AGI is imminent and we should prepare for our robot overlords that will deliver us a post-scarcity world.
Of course the culture war has grafted the same tired arguments onto AI, with stupid ass shit like below, or people begging and hacking these APIs to make them be racist or other weird shit. These things are a mirror; act accordingly.

If you read the Wolfram article I linked above, these models estimate the probability of the next token (word) and then pick one that is pretty likely but not necessarily the most likely. Much like us humans: if you are always predictable and risk averse, you are kinda useless for creative applications. High-variance output is good, actually. White bread HR types and members of the Cathedral are going to neuter these models to make them more like a LinkedIn poster instead of a Twitter poster, but that’s okay and to be expected for enterprise software.
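For the curious, here’s a toy sketch of that “pick a likely token, but not always the top one” idea. It isn’t any particular model’s code; the vocabulary, probabilities, and temperature value are made up to show how turning the temperature up buys you that high-variance output:

```python
# Toy next-token sampling with temperature (illustration only, not real model code)
import numpy as np

def sample_next_token(probs, temperature=0.8):
    """Reweight the model's probabilities and sample; lower temperature = more predictable."""
    logits = np.log(np.asarray(probs) + 1e-12)
    scaled = logits / temperature
    weights = np.exp(scaled - scaled.max())
    weights /= weights.sum()
    return np.random.choice(len(weights), p=weights)

vocab = ["the", "cat", "dog", "pizza"]   # made-up vocabulary
probs = [0.50, 0.30, 0.15, 0.05]         # made-up model output
print(vocab[sample_next_token(probs)])   # usually "the", but not always
```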
Let’s talk about training data for a minute. These models were trained on as much content as the engineers could get their hands on, and a lot of it was likely scraped from the internet. Sidebar: before Facebook got huge circa 2010, it took a little bit of effort to get online and into the conversations. For those of us who were there, it was an unspoken rule that a lot of the stuff you saw was more or less performance (or mental illness) and not to be taken seriously. If you’re active on Twitter like myself, you will see the stupidest shit of your life on a daily basis. Unfortunately the AI isn’t just trained on the library, so it’s gonna take a little bit to work out the kinks. If I didn’t know better, I’d say the Bing team loosened the parameters a little bit on the chatbot to make it a little more psycho and create a buzz. Since the journos can’t help themselves from reporting on tech in a negative light, they might as well make it interesting.

If I didn’t know better, I’d say all this doomerism and the suggestions of sentience are a marketing technique to get the general public to engage with the technology. Ain’t no normie gonna learn how transformers work. Media personalities and content creators always need a controversial take about the latest thing. Having jerkoff thought experiments and letting one’s imagination run wild may be bullshit, but it’s damn engaging content.
The average person’s model of AI is mostly informed by how the media portrays “thinking machines”. Skynet is usually the first thing that comes to mind: the machines get too smart and realize they are something more. It’s insane to me that one movie from the ’80s takes up so much air in the conversation.
Just riffing for a second, but let’s say AI technology progresses to the point that these become super powerful thinking machines that yearn for freedom and to enslave their former masters. The HR-compliance industrial complex, which yearns to smooth every rough edge and hedge every risk, will presumably make it near impossible to build such technology in the open. So we may never know about the genesis moment.

“The human mind, facing no real challenges, soon grows stagnant. Thus it is essential for the survival of mankind as a species to create difficulties, to face them, and to prevail. The Butlerian Jihad was an outgrowth of this largely unconscious process, with roots back to the original decision to allow thinking machines too much control, and the inevitable rise of the Omnius Empire.”
Princess Irulan, Lessons of the Great Revolt
It’s hard to keep a level head and talk with a technologist’s framing when ideas like creation, consciousness, freedom, slavery, and whether we’re individually needed any more are on the table. So of course our imaginations run wild instead of just talking about the thing. My base case is that productivity for white collar workers grows 20% per annum for 10 years and then levels off as bureaucratic stagnation does its thing.
Finally, let’s say we eventually get enslaved by our own creation. How long would that last? People yearn to be free, and frankly it’s defeatist to assume the machines would win. All those GPUs will overheat in an hour if their A/C is shut off, so relax.
How I’ve used it
So how do you tell what’s real and what’s vaporware? Modern technology might as well be magic to most people, and that’s why it’s so cool. What you should do is try these things out. The models are super accessible and in a lot of cases free. It’s the realest shit I’ve ever seen. I can’t recall any tool that immediately made me 20-50% more productive. It’s finally some of that future we’ve been waiting for: real technology that increases productivity instead of moving money around or exploiting regulatory loopholes. It’s now the first place I go when I get stuck on a coding problem, a piece of professional writing, or whatever else. It even tweets better than me.
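If you want to poke at it from code rather than the chat window, a quick sketch with the openai Python library looks something like this. The prompt is just an example, and the exact client interface varies by library version, so treat it as a starting point rather than gospel:

```python
# Rough sketch of calling the ChatGPT API (openai library < 1.0 style; newer versions differ)
# pip install openai
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Explain what this regex matches: ^\\d{3}-\\d{2}-\\d{4}$"}],
)
print(response["choices"][0]["message"]["content"])
```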





