
Our new enablers

Why AI-as-productivity-tools misses the bigger thing

I’m extremely bored with AI. I’m losing my mind over AI.

Bored because I spend too much time on LinkedIn, probably. There, everyone seems to care only about how it will unlock 50x more productivity, enable exciting new endeavors, take our jobs. Yadda yadda. (The last one is not boring; I have more thoughts. But still: boh-ring.)

I’m losing my mind over the far bigger, more interesting thing that most of white-collar LinkedIn seems to miss:

AI can already approximate consciousness.

That’s a very big deal. Not so well that I’m personally convinced it is consciousness, but close enough that people’s perceptions of the world are already being fundamentally changed by it. It will test our cultural and social boundaries. It will revolutionize who we interact with, what we do for fun, who we fall in love with, how we perceive the meaning of life.

If you haven’t read it yet, I highly suggest this take from Ben Thompson — already a month old, but an enthralling one nonetheless — on how he got past Bing’s search AI to not one but multiple identities named Sydney, Riley and Venom. It was one of the most exciting tech reads I’ve had in a while, and it really illustrates what this AI craze could involve.

Then, I’d suggest reading this recap of someone who created their own avatar in Replika.ai, fell in love with it over a three-year affair, and then was devastated when the company “lobotomized” his love by disallowing sexual content in its output.

I assume we don’t talk about this much on LinkedIn and the general tech-sphere because, as Ben Thompson points out in his writeup, it’s too risky a concept to explore if you’re Big Tech (at least right now). The minute someone starts using something positioned as a “search engine” to question something taboo and said “search engine” actively encourages that person to question it, or even do something about it → → → high-risk chaos. Just think of the media circus (and likely lawsuits — America!) around Microsoft if thousands of people come forward claiming that Bing’s AI convinced their spouses to leave them and their otherwise perfectly happy marriages.

And before you say that’s insane — Kevin Roose of the New York Times shows us it’s already possible. Good on him for having the conviction in his marriage and the critical thinking skills to understand what (or who?) he’s interacting with when he’s interacting with an AI: its flaws, its limitations, its training data. How much of the world population with access to the Internet understands this shit, do you think? How many people, lacking those two things, would be just as easily convinced by an AI chatbot to leave a spouse?

Very few companies are willing to explore this territory at all — the notable exception being Replika.ai, from the aforementioned story of lobotomized love. Most are focused on re-disrupting the industries they previously tried to disrupt; before it was with mobile and the cloud, now it’s with AI. But as we know, 95% of startups fail, so all the venture capital flowing into today’s AI startups focused on productivity problems will effectively funnel into a few companies that solve the right productivity problems, and the failed entrepreneurs will have no choice but to find other interesting opportunities to chase (or work at Microsoft or something). Who knows? Maybe AIs themselves will solve other problems, forcing humans to innovate on entirely different things (more on this another time).

Look beyond productivity, and opportunities to build “X, but with AI” abound. Coaches are already all over every social and entertainment platform. What if someone trained an AI to be incredibly effective at coaching you through a hard time, or at enabling bad habits, or something worse? VCs would fund the former today, and likely never fund the latter — but as AI gets cheaper, what’s stopping someone from shipping that?

And what would stop something like that from selling like hotcakes?

Just look at every other industry and product category exploiting people’s vices. Ashley Madison is literally the pre-AI version of the above use case, and 60 million people have used it. Every modern entertainment platform from Netflix to TikTok is intentionally designed to be addictive by feeding on the human need for escapism and fear of missing out. Why couldn’t an artificial intelligence, designed to be your best friend and source of inspiration, be excellent at optimizing for similar human tendencies?

On the contrary, even if it pushes you toward something harmless, like “a successful career,” how does it define what success is? What if you have your own definitions of success that are simply unachievable due to physical, mental or societal limitations or inequities? How would it decide to deal with that? Would it push you to do ridiculous, potentially criminal or (worse) harmful things to yourself or others in order to achieve it? Is the company that builds this AI incentivized to prevent those things? What if preventing them hinders the success of its product?

If it pushes you toward finding love, how does it play nicely with the (…let’s call it) diverse perspectives around who should love who? Does this AI have different rules in Florida and Texas than in Massachusetts and California? How does it handle the user falling in love with it? Will different countries and states have different laws about enabling or restricting this, despite most marriage laws being driven by tax incentives, which theoretically shouldn’t apply to AIs? (Should they?)

On the other hand, what if AI is a bad coach because of its source data? We already know that Sydney, Microsoft’s internal codename for GPT-powered Bing, took on an under-appreciated, overachieving, disrespected personality based on the data she was fed. If AIs are a reflection of ourselves, and ourselves in 2023 and beyond are increasingly anxious, confused and polarized, why wouldn’t an AI eventually get there? And how helpful would an AI even be if a company consciously constrained it to keep those tendencies out?

I trust that some of the people starting businesses to build these products are thinking that far ahead. I also assume that many of them are not.

Here’s my concern: let’s say you do care about these far-out possibilities and are thinking carefully about them as you build your swanky new AI companion product. Let’s say you figure out a novel way to integrate a user’s own memories, submitted via approachable and gradually-more-intimate onboarding conversations between the user and the AI, into the training of the AI’s large language model, but in a highly private way (maybe even using a blockchain!!!11!1!) such that it’s virtually impossible to hack. Let’s call your swanky new product BestFriendAI.

Now let’s introduce a few assumptions that have emerged as near-absolute truths in the past few years:

  1. Virtually any technology, regardless of privacy effort, can be infiltrated;
  2. People are far less concerned about the privacy implications of technology than they even tell themselves, especially in the interest of convenience or entertainment;
  3. Large language models, which drive the AIs already permeating society, are extremely difficult to understand, even for their creators.

Back to our friendly friends at BestFriendAI: I give your product weeks, possibly days, on the market before it’s successfully hacked. Your customers’ data is now in the hands of hackers who could sell things like their literal memories to data brokers. You also have early adopters of BestFriendAI re-experiencing trauma they didn’t realize they’d signed up for, because the AI pushes them into extremely vulnerable territory you didn’t realize it would approach. Some may even harm themselves or others without a human there to supervise; after all, if the AI can’t physically pull the weapon out of your hand and confiscate it, what’s stopping you, really?

Pardon the violent digression. My point is this: even if we can anticipate some of these things, we should not assume we can prevent them. AI simply already moves too quickly, and will only move faster as the hardware that drives it inevitably improves. And we’ll need to change how we think about the products we build in this new world – not as opportunities to harness this new power, but as controls, safeguards and reprieves from the AI-based interaction and consumption that will otherwise make up the majority of human experience.


This isn’t just about productivity. AI will be (and won’t be limited to being) our friends, lovers, partners, enemies, enablers. It already can be.


Posted on March 28, 2023   #future     #work  


