Dr. Craig Kaplan is the Founder and CEO of iQ Company, a leading consulting firm specializing in the ethical and safe development of advanced AI and superintelligent systems. With over 20 years at the helm, Dr. Kaplan has authored a book, holds numerous patents, and has published ten whitepapers on Safe Superintelligence. His former company, PredictWallStreet, earned top recognition for outperforming major financial institutions like NASDAQ and TD Ameritrade.
Here’s a Glimpse of What You’ll Hear:
- [03:27] Dr. Craig Kaplan explains how rapid advances in computing unleashed today’s AI boom
- [05:26] Why real-world AI problems are tougher than you think
- [07:46] The unstoppable forces pushing AI development forward
- [11:29] Why designing safety into AI from the start is far more effective than patching it later
- [15:58] How collective intelligence can create safer, smarter AI
- [20:48] Why humans must stay in the loop for ethical AI
- [24:19] Ways collective approaches can actually boost AI profits
- [27:31] Why “safe” doesn’t have to mean “slow” for AI
- [31:52] Dr. Kaplan shares his insights on how AI is evolving toward safer community models
In this episode…
As artificial intelligence continues to evolve at lightning speed, the world is grappling with a pivotal question: can we build systems powerful enough to change the world without losing control of them? What would it take to design smarter, safer, and more transparent AI that humanity can truly trust?
According to Dr. Craig Kaplan, a pioneering figure in AI and collective intelligence systems, the key lies in prevention and design. He emphasizes that most of today’s AI models function as “black boxes,” where even their creators can’t fully explain how decisions are made — a recipe for unpredictable behavior. Dr. Kaplan argues that the industry must focus on embedding safety at the design phase, not patching it afterward. Drawing from decades in AI and software quality, he highlights how systems designed with human oversight, transparency, and collective intelligence can be both safer and more profitable, ensuring accountability while maintaining innovation’s momentum.
Tune in to this episode of the Smart Business Revolution Podcast as John Corcoran interviews Dr. Craig Kaplan, Founder and CEO of iQ Company, about designing safer and more transparent AI systems. They explore the flaws in current AI training, the promise of collective intelligence, and the urgent need for ethical frameworks. Dr. Kaplan also shares why smarter design (not slower development) is the path to both safety and progress.
Resources mentioned in this episode:
Special Mention(s):
- Superintelligence: Paths, Dangers, Strategies by Nick Bostrom
- Max Tegmark on LinkedIn
- Constitutional AI: Harmlessness from AI Feedback by Anthropic
- Pause Giant AI Experiments: An Open Letter
- OpenAI
- Anthropic
- Perplexity
- Waymo
- Cruise
Quotable Moments:
- “If we’re not going to slow down, we better figure out a way to do it safer.”
- “An ounce of prevention is worth a pound of cure. That’s what AI safety and design is about.”
- “We don’t want dictatorial AI, we want democratic AI or the collective intelligence approach to AI.”
- “The longer humans and AI are in contact, the greater the chance they absorb our values and ethics.”
- “Somehow there’s this perception in Silicon Valley that to be safe means to go slow or to stop, and that’s wrong.”
Action Steps:
- Design safety into AI from the start: Building transparency and accountability at the design phase prevents dangerous “black box” behavior later.
- Promote human-AI collaboration: Keeping humans in the loop helps AI systems inherit ethics, empathy, and nuanced reasoning from human input.
- Adopt collective intelligence models: Combining many AI agents with human oversight creates smarter, more ethical, and inherently safer systems.
- Advocate for responsible innovation: Encourage industry peers to prioritize prevention over speed to reduce existential risks while advancing progress.
- Educate teams on AI ethics: Raising awareness about safety and design empowers developers and leaders to shape AI that benefits humanity responsibly.
Sponsor: Rise25
At Rise25 we help B2B businesses give to and connect to their ‘Dream 200’ relationships and partnerships.
We help you cultivate amazing relationships in 2 ways.
#1 Podcasting
#2 Strategic Gifting
#1 Our Predictable Podcast ROI Program
At Rise25, we’re committed to helping you connect with your Dream 200 referral partners, clients, and strategic partners through our done-for-you podcast solution.
We’re a professional podcast production agency that makes creating a podcast effortless. Since 2009, our proven system has helped thousands of B2B businesses build strong relationships with referral partners, clients, and audiences without doing the hard work.
What do you need to start a podcast?
When you use our proven system, all you need is an idea and a voice. We handle the strategy, production, and distribution – you just need to show up and talk.
The Rise25 podcasting solution is designed to help you build a profitable podcast. This requires a specific strategy, and we’ve got that down pat. We focus on making sure you have a direct path to ROI, which is the most important component. Plus, our podcast production company takes any heavy lifting of production and distribution off your plate.
We make distribution easy.
We’ll distribute each episode across more than 11 unique channels, including iTunes, Spotify, and Amazon Podcasts. We’ll also create copy for each episode and promote your show across social media.
Cofounders Dr. Jeremy Weisz and John Corcoran credit podcasting as being the best thing they have ever done for their businesses. Podcasting connected them with the founders/CEOs of P90x, Atari, Einstein Bagels, Mattel, Rx Bars, YPO, EO, Lending Tree, Freshdesk, and many more.
The relationships you form through podcasting run deep. Jeremy and John became business partners through podcasting. They have even gone on family vacations and attended weddings of guests who have been on the podcast.
Podcast production has a lot of moving parts and is a big commitment on our end; we only want to work with people who are committed to their business and to cultivating amazing relationships.
Are you considering launching a podcast to acquire partnerships, clients, and referrals? Would you like to work with a podcast agency that wants you to win?
Rise25 Cofounders, Dr. Jeremy Weisz and John Corcoran, have been podcasting and advising about podcasting since 2008.
#2 Our Comprehensive Corporate Gifting Program
Elevate business relationships with customers, partners, staff, and prospects through gifting.
At Rise25, thoughtful and consistent gifting is a key component of staying top of mind and building lasting business relationships. Our corporate gift program is designed to simplify your process by delivering a full-service corporate gifting program: from sourcing and hand-selecting the best gifts to expert packaging, custom branding, reliable shipping, and personalized messaging on your branded stationery.
Our done-for-you corporate gifting service ensures that your referral partners, prospects, and clients receive personalized touchpoints that enhance your business gifting efforts and provide a refined executive gifting experience. Whether you’re looking to impress key stakeholders or boost client loyalty, our comprehensive approach makes it easy and affordable.
Discover how Rise25’s personalized corporate gifting program can help you create lasting impressions. Get started today and experience the difference a strategic gifting approach can make.
Email us through our contact form.
You can learn more and watch a video on how it works here: https://rise25.com/giftprogram/
Contact us now at support@rise25.com or message us here https://rise25.com/contact/
Episode Transcript
Intro: 00:00
All right. Today we’re talking about the future of AI and also some of the risks associated with AI. My guest today is Dr. Craig Kaplan. He’s been working with artificial intelligence for a long time. I’ll tell you more about him in a second, so stay tuned.
John Corcoran: 00:15
Welcome to the Smart Business Revolution Podcast, where we feature top entrepreneurs, business leaders, and thought leaders and ask them how they built key relationships to get where they are today. Now let’s get started with the show.
John Corcoran: 00:32
All right. Welcome, everyone. John Corcoran here. I’m the host of this show. And you know, every week we feature smart CEOs, founders and entrepreneurs from all kinds of companies.
And if you check out our archives, we’ve got Netflix, Grubhub, Redfin, Gusto, Kinko’s, Activision Blizzard, LendingTree. Lots of great episodes for you to check out. In fact, we’ve had many episodes focused on AI and how that’s affecting our broader economy. But before we get into that, this episode is brought to you by our company, Rise25, where we help businesses to give to and connect to their dream relationships and partnerships. How do we do that?
We do that by helping you to run your podcast and content marketing. We are the easy button for any company to launch and run a podcast. We do strategy, accountability, and full execution, all three of which are critical if you want to be doing it longer than a week or two. So if you want to learn more about that, go to our website, rise25.com. Or you can email our team at support@rise25.com.
All right. My guest today is Dr. Craig Kaplan. He’s a pioneering figure in artificial intelligence, collective intelligence systems and AI safety. He’s the Founder and CEO of iQ Company, which has been around for over 20 years. And he’s been developing intelligent systems in all of that time.
His work spans safe AGI, superintelligence frameworks, and distributed architectures designed to ensure alignment and accountability in powerful AI systems. That’s kind of a mouthful, but what we’re going to be talking about here today is really AI safety, and I tend to be a real optimist. I’m really a glass-half-full kind of person, and so I focus most of the time on the positives of AI. But I don’t want to be Pollyanna-ish about it. I want to be a realist about it.
And so I thought that this would be a great conversation to have. So, Craig, pleasure to have you here. And I’m really interested in this conversation. And I guess my first question out of the gate is superintelligence. How did you get that as a domain?
I mean, that is not easy. Well, you must have been a forward thinker.
Dr. Craig Kaplan: 02:31
Hey, John, great to be on your podcast. Yes, superintelligence. I think we got that domain name in 2006. So a few years ahead. Yeah, there was a book by Nick Bostrom that sort of introduced most of the world to superintelligence, and I think that came out in 2014.
So yeah, I’ve been working in AI since the 80s, so I think it was a little easier for me to see where things were headed, just because this has been my lifelong passion.
John Corcoran: 03:01
So for so many people, myself included, it was when ChatGPT came out, about two, two and a half years ago now I think it was, and kind of rocked the world. What was that like for you? Were you like, finally, people are paying attention to this? Or were you like, oh, here we go, this is what I’ve been warning people about?
You know, having devoted so much of your life to it, what was that moment like for you?
Dr. Craig Kaplan: 03:27
So it’s interesting you said at the top of the show, you’re sort of an optimist and enthusiastic about AI, and that’s really the same for me. I mean, I love AI and I’ve been doing it for decades, so I didn’t approach it as, oh, this is something to be afraid of, but rather something that’s, you know, really positive. And I still think that the most likely case is that it’s very positive for people. But ChatGPT, I think, took everyone by surprise, even people in the field. In particular, how quickly it was adopted and just the capabilities.
Because researchers had been working on large language models for years and years and years, and there was kind of a tipping point where computing power finally caught up, and the same old algorithms that only kind of worked five or ten years ago all of a sudden worked really, really well. And the story I like to tell about that: when I was a graduate student at Carnegie Mellon, sort of in my 20s, they had self-driving cars. So this is like 1985 to 1989. We actually had self-driving cars.
They drove themselves. It was a van loaded with computers and video cameras, but it only went about six inches an hour. I mean, the computing was so slow that you could put a leaf in front of it and it would just stop and say, is that a leaf? And it would, like, think about it for 20 minutes, and then: no, it’s okay, I can move forward another inch.
Well, those algorithms that drove, no pun intended, drove those cars are not that different from the algorithms that Tesla and other people are using. So the actual technology hasn’t changed that much. What’s changed is the speed of processing.
It’s just gone up, you know, a millionfold, which lets you go 100 miles an hour now instead of six inches. So that was a big change.
John Corcoran: 05:16
So when you had those self-driving cars back in the 80s, did you think that it would be 2025 by the time we had Waymos driving all over San Francisco and Austin? Or what did you think?
Dr. Craig Kaplan: 05:26
I think people were consistently overly optimistic. We always thought it was, you know, ten, maybe twenty years away and we’d have general intelligence or self-driving cars. And I think there was a lot of underestimating the difficulty of real-world problems. When you have a toy problem like chess, or even Go, where the rules are well known, you can get AI to be very good, better than the best human, relatively easily, but that’s because you’ve got a very narrow set of rules and you know exactly what all the actions are.
With a self-driving car or something like that, I mean, you can have kids dressed up in Halloween costumes to look like deer running in front of the car. A human knows that’s a kid in a Halloween costume. But, you know, that’s a very rare event; it doesn’t happen very often. And so a self-driving AI may not know what to do with that, may not be able to recognize it.
And there’s so many variables and so many different conditions that those real world problems are really, really hard. And so it took a long time to do it.
John Corcoran: 06:32
What drew your interest to devote so much of your life to working with intelligent systems, especially back in the 80s when, as you said, the processing power wasn’t there yet?
Dr. Craig Kaplan: 06:47
So I’ve always just been interested in, you know, how people think and also how intelligent systems think. And back in the 80s, the idea that you could have a computer that could think like a person, that was amazing. It’s like, wow, is that possible? Imagine what you could do with that. So I think that’s just what kind of captured me.
I was taking, you know, neuropsych classes, understanding how the brain worked, and thought, wow, it’d be really neat if we could get computers to do some of this stuff.
John Corcoran: 07:18
So I know you said that around 2020, you were going to retire. But at that point in time, which is two or three years before ChatGPT takes the world by storm, you saw where the world was heading with AI, and you felt like you needed to devote this next chapter of your life to warning people about the dangers of AI. What was it at that moment in time that you were seeing?
Dr. Craig Kaplan: 07:46
So right around 2020 to 2022. I think it was November 2022 when ChatGPT was released publicly, but the research community kind of saw what was coming earlier, because there were research papers, and even though it wasn’t a product that everyone could use, you could read about it and say, wow, it can do that. That’s amazing. And you could kind of see the rate of progress. I think what happened was I looked around and started listening to the CEOs of some of the tech companies. This goes back to Elon Musk when President Obama was president. You can find videos where Elon is saying, well, I went and had a meeting at the White House, and the only thing on my mind was to warn him about the dangers of AI that were coming, and nobody listened.
So I thought, wow, this guy realizes the dangers and nobody’s listening. And then I started looking at other leaders: Bill Gates, the head of Google, and, you know, Meta. Everybody was very enthusiastic about building it. And the amazing thing was almost everybody acknowledged that there was a risk it could wipe us all out.
But that didn’t stop them. They were going to keep going. And then, having spent a lot of time in Silicon Valley, and with my previous company, PredictWallStreet, a lot of time on Wall Street, I know how Wall Street thinks. There are very powerful forces at work: if you have something that’s the most profitable and powerful technology ever developed by humankind, which is basically what AI is, the lure of that is too great. People feel like, if I don’t rush to do it, somebody else will. And that applies at the company level.
So even though Google might have wanted to slow down, maybe OpenAI wanted to go faster, and then Google, or Microsoft, just as a matter of survival had to sort of keep pace. Similarly, at the country level, the US isn’t going to slow down, because what if China gets a lead? And famously, there’s a computer scientist at MIT, Max Tegmark, who’s also the founder of the Future of Life Institute. Early on, he put out an open letter called the Pause Letter. And he got, you know, thousands of signatories, really respected people, saying, look, this stuff is dangerous.
We need to pause. We need to take a break and sort of figure out how to develop it safely. Some researchers signed it, but a lot of top people in the field would not sign it. I think even Geoff Hinton, who left Google and is now making the rounds talking about the importance of AI safety, did not sign it.
He said, look, if we pause and China doesn’t, that’s not going to do anything. So there are these competitive dynamics that make it very, very difficult to slow down. And when I realized that, I thought, okay, if we’re not going to slow down, we better figure out a way to do it safer, because this is coming. It’s like a baby. It’s coming whether you want it or not, whether you’re ready or not.
For those parents out there, you know what that’s like. The baby’s coming. Let’s make sure we don’t have it in the parking lot. Let’s make sure we’ve set everything up as safely as we can.