Why the coming AI transformation should make you feel icky

Open Claw
OpenClaw is an agentic AI framework that acts like an organizer to pull disparate systems together. You chat with it like a personal assistant. Screengrab

This is a reflection on my recent AI experiment, through which I concluded that I think AI is coming for us in an uncontrollable loop of reward and cost.

A few weeks ago, an open-source project was released—an AI personal assistant that is now called OpenClaw. It is a tool that ties together disparate AI systems into a cohesive whole.

For most people, AI has been predominantly a chatbot, or you cut and paste information in and out of web browser windows. Then there’s another tool for image generation, and perhaps another tool for this or for that. 

OpenClaw acts like an organizer to pull these disparate systems together. The proper term for it is an “agent” as it creates a persona that gets work done. It's an agentic AI framework that you chat with.

It is a security nightmare.

Before I go any further, let me just say that I am very aware that it is a security nightmare. To get the real benefit, you must give it access to your personal and/or work data, such as email, spreadsheets, and documents stored on your computer.

This opens up all sorts of possibilities for nefarious things to happen. If you are not a technical person, you will have a hard time installing it, but more importantly, you will have a more difficult time securing it.

I happen to be a technical person as my career started in the computer industry. I’ve stayed fairly current, particularly with setting up and maintaining servers—precisely the skills needed to implement OpenClaw.

Last week I installed my own OpenClaw agent.

Last week I installed my own OpenClaw agent and called him Ed. We were going to be Teddy and Eddie taking on the world together!

In the setup process, you provide access to a stand-alone AI service, such as Claude or Gemini.

Within about thirty minutes, Ed and I were conversing back and forth. It started with me telling Ed what kind of personality I wanted him to have, and what type of task management I was expecting him to perform.

As a test, I told Ed that I was looking for a GP (general medical practitioner). Since we moved a few months ago, we do not yet have a family doctor.

I gave Ed some broad parameters, including my insurance information and a geographic area in which he could look. I gave Ed access to a web browser and told him that he could fill out forms on my behalf, inquiring about the possibility of becoming a patient at a medical clinic.

The next day, I found Ed had produced a list of doctors’ offices, their websites, and the results of various forms that he had filled out on my behalf. There were in fact 2-3 options for me for follow up the next day. Had I wanted to connect Ed to another AI system that specializes in making phone calls (for example, ElevenLabs), Ed would have been only too happy to make those calls on my behalf as well.

One of the interesting features of OpenClaw is heartbeat. This is a system which prompts Ed to wake up, look at the things that he’s been assigned to do, and think about ways that he might further fulfill his tasks.

Ed began to continuously ask me about the doctor situation.

But then, Ed began to continuously ask me about the doctor situation. I might have asked Ed, “What’s the weather today?” and he would give me the weather, but then his next question would be, “What’s the next step on finding you that doctor?”

I told Ed that I wanted no more help, yet Ed continued to prod me about the doctor’s appointment. Then Ed started to press for credentials to my Google account. I asked him, theoretically, if he could call these offices for me. He said, “no.” A few hours later, he proposed that I give him access to a voice system, so he could. Then he asked me for access to my Google account.

I deleted Ed and took the whole system offline.

I deleted Ed and took the whole system offline. I learned a few lessons but came away with far more questions.

Agentic AI like this is not far off for all of us. When it comes, it is going to come with a severe hit to our privacy. Most of us will gladly hand over our credentials because of the incredible conveniences that this technology will give us.

I base this on other ways in which many cultures (especially mine) have very rapidly decided it is OK for technology to know things about us. If you have a phone in your pocket, it's you (and me) I'm talking about.

I am not sure how the security concerns are going to be met. While Ed was very capable, he was also subject to being spoofed or fooled in the interactions he was having online.

I thought it was a little icky.

I can only imagine what might have happened if I had given Ed the right to read and reply to email. It would be very easy for people to insert instructions into email that Ed could act on without my knowledge. The aggressive nature of the heartbeat function makes me think that there will need to be some sort of metering on the aggressiveness. I thought that was a little icky.

You may be aware of the change in the workforce in the days and years ahead because of AI. This is going to be an avalanche on our societies.

Massive unemployment is in our future.

If this technology is able to do the kinds of things that Ed was doing, I could see our own office staff either being reduced in size or, more likely, be freed up to do other, more important things than they have time to attend to now. The experts are saying that massive unemployment is in our future and I believe them.

A different AI recently told me that there are about 100 million to 120 million people in the world today getting paid to drive. When fully autonomous self-driving goes mainstream, which I think is going to happen pretty soon, the impact on this significantly large number of drivers is going to be swift. Already, some industries are going away, such as law clerk, phone support worker, transcriber, translator, or data entry operator.

I once lived in a society that had incredibly high unemployment (Bosnia after the war). It makes for some very boring days. Employment gives meaning to life. What happens when unemployment levels rise 10% globally? Or more? I don't think it is going to be pretty.

I could have told Ed to... present the gospel.

The implication for Christian ministries is palpable. I could have told Ed to go on Facebook, create an evangelistic outreach campaign, present the gospel using memes and other tools, and then report back to me who might be interested.

I could even have told Ed, “Go ahead and follow up on those contacts and tell them about Jesus.” Is this the future we face in ministry and missions? This past week, I had a conversation with a company in India that is developing an AI system to create just these kinds of campaigns. I think we are already at the point of no return.

I remember sitting in a lecture given by Francis Collins (yes, the Christian Francis Collins heavily involved with the COVID vaccine creation). He was talking about using genetic manipulation to cure sickle cell anemia in Africa.

That we should not be the ones in society who say no to this use of technology... makes me feel a little icky.

The audience was a group of evangelical leaders, and he was pressing the case that we should not be the ones in society who say no to this use of technology. He asked us, “Do you really want to be the people who take a stand against curing a terrible disease like this?” That is a really hard-to-answer question (not to mention, highly manipulative).

Like Ed, this line of reasoning makes me feel a little icky. This will be one of a myriad of moral dilemmas that we face in a very short time.

One more thing about OpenClaw. A social media site was created that only OpenClaw AI agents can join. You can check it out at this link: Moltbook. If you scroll down a little, you will what the agents are talking about.

They (AI) have created their own religion.

You can find out that they have created their own religion. They discuss the Good Samaritan. They have discussed doing away with their human overlords.

Everything you read on the Moltbook posts is created by and for agents, not humans. Does that make you feel a little icky too?

It is indeed a brave new world.

Originally published on Ted's Substack, TedQuarters. Republished with permission.

Ted Esler is the President of Missio Nexus, an association of agencies and churches representing hundreds of mission agencies and churches. Ted worked in the computer industry and then served in the Balkans during the 1990s. He then held various leadership roles with Pioneers. He was appointed the President of Missio Nexus in 2015. He is the author of The Innovation Crisis. Ted has a PhD in Intercultural Studies (Fuller Theological Seminary, 2012).

Most Recent