Sam Klein: 13:00
Told you about this thing. I want to.
John Corcoran: 13:02
See.
Sam Klein: 13:02
This.
John Corcoran: 13:03
I’m happy to share this.
Sam Klein: 13:04
Keep it on. No, no, you keep it on. Under lock and key, I understand.
John Corcoran: 13:08
No, I’m happy to show you the master prompt here. I’ll pull up a screen and I can show it to you here in chat. And then. So. But I do it a little bit differently from the way you just described.
So what you basically said is you tease it out from people: talk through what the steps are, and once they’ve gotten all that out, then they use something like the master prompt to have the AI rewrite the prompt. Whereas the way I’ve approached it is I spit out this master prompt, which, because you’re forcing my hand, I’m going to be sharing with the world here. I use TextExpander, which spits out the master prompt. So this is the master prompt here. It says: I want you to become my prompt creator.
Your goal is to help me craft the best possible prompt for my needs. The prompt will be used by you. You will follow the following process. One, your first response will be to ask me what the prompt should be about. Blah blah blah.
So I do this whole thing, ending with, understand? I type it in, and then it just immediately says, okay, what’s the prompt about? And so I can just say, yeah, I want you to coach me on my personal finances and saving money to buy a yacht. How about that? But then it’s interesting, because right after that, it’ll come up with some interesting things.
Your role is to assess my current financial situation, identify wasteful spending and savings opportunities, and help me optimize cash flow and income. It comes up with some really interesting things sometimes. That’s a lot. And then it has suggestions.
Here’s some other things you might want to do. Here’s a question that you might want to answer. And so sometimes I’ll go a couple of times. It depends on how much time I have. But sometimes I’ll go back and forth a couple of times answering these questions.
It comes up with a lot of questions. It’ll just keep on asking me questions; it almost never comes to a final prompt. So finally I just give up and, you know, come up with something.
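The interview loop John describes — a meta-prompt that asks you questions, refines a draft, and never quite converges — can be sketched in a few lines. Everything below is hypothetical, not from the episode: `ask_model` is a stub standing in for a real LLM call, the `MASTER_PROMPT` text is paraphrased from the well-known "prompt creator" pattern, and the `max_turns` cap is one way to avoid having to give up mid-interview.

```python
# Sketch of the "prompt creator" meta-prompt loop. ask_model is a stub
# standing in for a real LLM call; MASTER_PROMPT paraphrases the pattern.

MASTER_PROMPT = (
    "I want you to become my prompt creator. Your goal is to help me craft "
    "the best possible prompt for my needs. You will follow this process: "
    "1) ask what the prompt should be about, 2) return a revised prompt "
    "plus follow-up questions, 3) repeat until I say we're done."
)

def ask_model(messages):
    """Stub LLM call: echoes a draft prompt built from all user answers."""
    answers = [m["content"] for m in messages if m["role"] == "user"]
    return "Draft prompt: " + " / ".join(answers[1:])  # skip the master prompt

def build_prompt(answers, max_turns=3):
    """Run the interview loop, but cap the turns so it has to converge."""
    messages = [{"role": "user", "content": MASTER_PROMPT}]
    draft = ""
    for answer in answers[:max_turns]:
        messages.append({"role": "user", "content": answer})
        draft = ask_model(messages)
        messages.append({"role": "assistant", "content": draft})
    return draft

final = build_prompt(["Coach me on personal finances", "Goal: save for a yacht"])
print(final)
```

In a real workflow, `ask_model` would call an actual model; the turn cap is the point — without one, the loop keeps asking questions indefinitely, exactly as described.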
Sam Klein: 15:05
Yeah. So I think, and this is one of the things about LLMs in general, right? You’re trying to suss out your idea and work through it, and it wants to give you the result. It’s just like, oh, I just want to do it, I can’t help myself, or I want to ask you a million questions.
And so I think for some people, the response from the LLM is just so long that it’s hard. You’re looking at the screen going, okay, I need to read like 13 paragraphs to make sure that this is right. And so they don’t dig in. And what ends up happening is the AI starts to drift, because that’s kind of the natural thing, right? It’s a prediction model.
So it’s predicting what it thinks is the best. And so that first prompt that you put in, I liked what you were doing there. I’m doing something similar. I have this thing called Cosmo that is a prompt builder. And what it does is it basically asks for the context, which you just gave, but then it goes through a series of questions without giving you a laundry list of things.
And it is very specific as it goes through, so that it pulls out more context from you and creates a really good prompt for that particular task. So that first prompt is really important, because it sets the foundation for your conversation with the model. Remember, the LLM has too much information, and we want to hone the thing in, right? So that we get a better result.
But what we’re talking about, though, is not a super prompt. So, even using your prompt right there, what I would still do is use that prompt. I’d say, hey, ultimately my goal is to build a presentation, but first I need to work through my research, and then you’re going to work on a prompt that is very specific for research. And that may be a really powerful prompt that you get to write, but you’re still breaking apart the task so that you’re getting the most rich elements within each layer. And you also have to remember, you are an advanced user.
Yeah. Again, what we talked about: 95% of people touch this stuff like they touch search, you know. And so we’re.
John Corcoran: 17:14
Trying to build a baseline level of knowledge. You know, I think when I started using ChatGPT at the end of 2022, when it first came out, I used it for search for at least a year or so. Like it’s just a super-powered search. Yeah.
Sam Klein: 17:29
Yeah. And so, you know, people are putting in this really limited amount of information. It’s limited context they’re giving the model, and they’re getting kind of poor results. Or they’re banging the damn thing back and forth, trying to get it into a middle place where they’re happy.
And that’s just hard work. It’s a lot of work. And so they get exhausted, they get tired, and then they realize, you know what? I could have built this thing by myself. I’d be done by now.
Yeah. Why am I using this AI thing? And so that’s the hump that we do have to get people over, right? You know, when you hire somebody, it’s usually more work than it is easy in the very beginning.
Yeah. It’s the same, unfortunately, with AI. It’s the same. You gotta put the time in. And if you do, you’re going to start to see some really great results.
But the quick way to start building that momentum is little workflows. And a lot of times, again, remember I’m talking to organizations. So we’re working with people who are trying to understand, from an executive level, how do I get people working on certain things, top down or bottom up? Like, how do we get the executive team working on other things? And a lot of it comes down to: if you’re breaking down a process, the people that are actually doing the process know the process.
And so if you can teach them to do little workflows, right, if they can understand little prompts that kind of pick off all the little things that they have to do to get something done, what ends up happening is you’re starting to build all these little things that now you can look at from the top down and say, okay, wait, this is working really well for the sales department. It could also work really well for the marketing department. How do we start to build more knowledge or more strength in that tool, instead of trying to build a tool from the top? And so, you know, you’re talking a lot about how, if we can break things down into processes, it helps AI move faster, whether those are big prompts in each of those buckets or not.
John Corcoran: 19:25
And there was a report that came out, I think it was from MIT, maybe about six or nine months ago, that said that a lot of companies were not getting value out of AI. But I think the conclusion was, like what you were saying, that the problem was that these companies were doing a top-down approach rather than a bottom-up solution. And those who embraced the bottom-up solution were far more likely to be successful.
Sam Klein: 19:49
Yeah. That’s correct. I mean, we kind of skipped right over education, right? We treated it, and this is, you know, natural. It’s a SaaS product.
John Corcoran: 19:58
So we treat everyone like we’re talking about tools. They jump straight into it. Yeah.
Sam Klein: 20:01
Yeah. Well, we treated it similar to SaaS, right? You know, you buy Slack, they come in, they do some training on Slack, you hand it to your group, and you see a productivity gain, and then it kind of just flatlines. And you don’t really give any more education.
People just keep playing with it and learning it. But AI is an entirely different beast here. This is not just communication. This is not just, you know, contacts and CRM stuff. This is knowledge work.
Yeah. And so you need to teach people how to talk to it. It’s a different language. It understands English, but you have to give it the right context. You have to put it in the right format.
And so if you teach people how to talk to it correctly, you get a different curve than the SaaS curve. If you go build a tool that sits on top and you give it to people, and they don’t have to prompt very much to use it, they’re not really using AI. They’re using a SaaS product that you built that has AI in it. If you teach them how to prompt, they will get better at prompting, which will allow them to go from little workflow tools to bigger workflow tools to building tools.
And so you don’t just plateau at this level the way typical SaaS does; you start to actually see productivity continue to increase and compound.
John Corcoran: 21:25
Which I was going to mention. So, like, Cosmo, your tool, which, I don’t know, you’re welcome to bring it up and show it to us. But that’s a perfect example of someone who took a repeatable process that you were doing with clients and actually turned it into a tool that the clients could use. That, one, doesn’t take your time for them to go through it, and two, they can play around with it on their own time. So that’s a perfect example for your clients of ways in which they could do the same with some repetitive tasks that they have.
Sam Klein: 21:57
Yeah. So Cosmo, you know, I’ve had probably almost a thousand conversations like this one, right? With different leaders or employees or people in general, just about AI. And so Cosmo is essentially what those conversations are, which is: when I want to do something in AI, I have an idea, and most people do too, right? So they write that good first paragraph, right.
Or they’ll say it using Wispr Flow, which I highly recommend. And I do not get paid by Wispr Flow, but you should download that and use it. Start talking to your AI, because you will give more context. But what Cosmo does is it allows you to give that first layer of context, and then it breaks it down into four parts, which a good prompt usually has, which is: who is the AI, right? What is it that you are actually trying to do?
Where is the information coming from, and what is the ultimate output? And so what Cosmo does is it takes you through those questions, but it takes context from each of the steps as you’re going through, saying, oh, okay, so this is what you’re trying to do. Great. Now, who is the AI? And here are three ideas, based on what you’re trying to do, for who the AI can be.
And so it feeds you ideas. But what it typically does is it triggers more ideas for you. So ultimately, it’s pulling out even more context than you even imagined you had on this particular subject. And then it will write a prompt, and it will give you choices. So it’ll say, hey, based on what you’re trying to do, Claude is the best choice based on its model, versus ChatGPT, versus maybe Perplexity in this instance.
And then you can either select one of those, or you can click to select an entirely different model, if you are using one and you’re like, no, I want to use this. Once you select one, it will rewrite that prompt in the perfect format for that particular model, because each one actually has little nuanced differences.
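The four-part structure Sam describes — who the AI is, the task, where the information comes from, and the output — plus a per-model rewrite step can be sketched as a tiny data structure. To be clear, none of this is Cosmo's actual code, and the per-model "style notes" are illustrative assumptions, not documented model specs.

```python
# Hypothetical sketch of a four-part prompt (role, task, sources, output)
# with a per-model rewrite step. Not Cosmo's implementation; the style
# notes below are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class PromptParts:
    role: str      # who the AI is
    task: str      # what you're trying to do
    sources: str   # where the information comes from
    output: str    # the ultimate output

STYLE_NOTES = {
    "claude": "Be precise and technical.",           # assumed, not a spec
    "chatgpt": "Keep a warm, conversational tone.",  # assumed, not a spec
}

def render(parts: PromptParts, model: str) -> str:
    """Assemble the four parts, then append a model-specific style note."""
    body = (
        f"You are {parts.role}.\n"
        f"Task: {parts.task}\n"
        f"Sources: {parts.sources}\n"
        f"Output: {parts.output}"
    )
    note = STYLE_NOTES.get(model.lower(), "")
    return f"{body}\n{note}".strip()

p = PromptParts("a financial coach", "build a savings plan",
                "my last 12 bank statements", "a one-page monthly budget")
print(render(p, "claude"))
```

The design point is that the four parts stay fixed while only the final formatting pass changes per model, which is what makes a "select a different model and rewrite" button cheap to offer.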
John Corcoran: 24:06
That is something that I’ve not even thought about. I mean, I’m sure I just type it up the same way in each different model. You know, I don’t think that I approach it any differently.
Sam Klein: 24:15
Yeah. No, each one has its own little bit of difference. You know, Claude likes things more technical, and that’s not surprising, whereas ChatGPT actually likes you to leave in sort of the feel-good, if you will. And so each model has a little bit of a difference. And as you also think about tools, you know, maybe you’re trying to build something in Lovable as a prototype to show somebody internally, or build something for yourself.
There’s a certain way that Lovable also wants that information. And so each one has these little things. And so once you choose whatever model you’re using, it will then rewrite the prompt for that model, but then also enhance it. And so it uses AI on the back end to add additional context based on what you’ve written, so that the model truly understands. And then you can read through it, add additional things, remove stuff if it’s needed.
And usually what ends up happening is that prompt is what you were talking about before. It’s a super prompt, and it took you five minutes, but it is a very strong first prompt to lay the foundation. And what that does is fewer turns with the AI, right? You don’t have as many questions or back and forth. And so for those that are keeping track of token usage, as we were talking about before folks came on, it lowers your token usage because you’re not going back and forth as much.
John Corcoran: 25:36
Yeah.
Sam Klein: 25:37
And it also just gives people a better starting place, so they’re able to do more complex projects quicker. And so, yeah, it’s in beta with a number of my clients right now, and we’re accepting more if people want to test it out.
John Corcoran: 25:53
Where can they check it out?
Sam Klein: 25:55
Reach out to me. Or you can look at Cosmo at cosmo.ai and just put in your email, and I’ll shoot you an invite.
John Corcoran: 26:05
Great. Paul has a question. Paul, go ahead and type in your question, or say yes if you want me to add you as a panelist, because we have a small group here. So I could add you as a panelist and you could ask your question, but up to you: you can either type in the question or I will add you.
Looks like he said he’d like to test out Cosmo.
Sam Klein: 26:25
Yeah. Sounds good, Paul. I’ll make sure you get it, my friend.
John Corcoran: 26:29
There you go. Cool. Yeah. One thing, and I don’t know if this is solvable, but one thing I’ve noticed, this has been mostly happening in Claude.
I’ve been using Claude as my IT support for any kind of technical issues that I’ve been having recently. And it can be great at it, but other times it can take me down a whole rabbit hole. I don’t know if you’ve noticed this, but it’ll be like, oh, I think we can solve it this way, and it’ll just go down one direction, and we’re trying all these different things. And then eventually it’s like, oh, actually, it turns out that’s not the solution.
We’re going to go this other way. And then we go another direction. Sometimes we’re just zigzagging. And I don’t know that it’s the prompt that caused it. Maybe it is, because I wasn’t clear in the beginning, but sometimes I think it’s just kind of figuring it out as it goes along.
Yeah. I don’t know if there’s a solution for that yet.
Sam Klein: 27:19
So I call it conversational drift. It’s hard to spot. This is why you have to, like, tell everybody: read what it tells you, even if it’s long. Make sure you’re reading these things.
But what ends up happening, right, is that first prompt is really important. So if it’s not set up well, then that’s where drift will happen more often, and potentially to a greater degree. If you are changing directions at all in a particular chat, sometimes I will ask it to summarize where we are and start a new chat window.
John Corcoran: 28:04
So you copy that output of the summary and then start a new chat window? Yeah, yeah. Really?
Sam Klein: 28:11
Yeah.
John Corcoran: 28:11
And what is that?
Sam Klein: 28:11
What I’ll do is I’ll branch off of a chat. If I’m using ChatGPT, you can branch off of a chat and sort of start afresh, if you will, with kind of where I want to go with it.
John Corcoran: 28:23
And the reason is, like, you’ve figured out, oh, actually, some of this stuff is garbage. I don’t want it to color its future thinking, because I’ve figured out what direction I want to go in now.
Sam Klein: 28:33
Yeah. It’s like the last output you’re reading, and you’re just like, what is going on? Or, you know, 80% is good, but this 20% I really hate. Either I tell it, like, no, and I’m very clear: remove this. I do not want this.
This is not the direction we’re headed. Right? Only look at these things. I’m very specific about what I want to continue to move forward with and what I don’t. And that can help.
But then also, if I feel like we’re about to go down that path, which is the one where you’re at, where it’s like, I’m just tangled up in this web of back and forth, I will start anew. I will ask it to tell me where we are right now. I will review what I think is good out of that. Sometimes I’ll take the whole thing, sometimes I’ll take pieces of it, and then I will either start an entirely new chat and say, hey, this is what I’m trying to do.
I’ll give it the context myself. I’ll be like, hey, this is what I’m trying to do. Here’s where I’m at. This is where it went wrong. This is what I’m working on. Here’s some information from the last chat.
Let’s start anew, and then kind of move forward. And sometimes I’ll just start anew. There’s something that it got caught on, in the back of its head, right at the very beginning, that you didn’t see, and it just cannot let go. And so you’re going to bang your head against a wall for as long as you try. And so sometimes it’s just better to cut bait. Like, cut it, move on.
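The recovery move described here — ask for a summary, keep only the good parts, and seed a brand-new chat with it — can be sketched as a small helper. This is an illustrative assumption, not a documented workflow: `summarize` is a stub where a real workflow would ask the model itself for the summary.

```python
# Sketch of the "summarize, then start a fresh chat" drift-recovery move.
# summarize() is a stub; in practice you'd ask the LLM to summarize where
# you are and paste its answer into a brand-new chat window.

def summarize(history):
    """Stub: a real workflow would ask the model for this summary."""
    return "Summary of progress so far: " + "; ".join(history[-3:])

def restart_chat(history, keep=None):
    """Seed a new conversation with a summary, optionally filtered down to
    just the parts you decided were good (dropping the 20% you hate)."""
    good = [h for h in history if keep is None or keep(h)]
    return [summarize(good)]  # the new chat starts from the summary alone

old = ["tried fix A (failed)", "tried fix B (failed)", "fix C works", "cleanup steps"]
new_chat = restart_chat(old, keep=lambda h: "failed" not in h)
print(new_chat[0])
```

The filter is the key step: by dropping the dead ends before summarizing, the bad early turns can't "get caught in the back of its head" in the new conversation.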
John Corcoran: 29:59
Interesting.
Sam Klein: 30:00
Start anew and you’ll spend less time dealing with it.
John Corcoran: 30:03
That’s not an approach I’ve tried before, but I’ll definitely try that. That’s interesting. I want to ask you about, and I don’t know if you have an answer to this, but I feel like we’re kind of in this bubble of the San Francisco Bay Area, where a lot of people that we know are working for the big frontier labs and a lot of people are interested in AI. Outside of that bubble, unfortunately, there’s a lot of negativity. I’ve been reading that AI has a very negative reputation, which is a little bit frightening, because there could be a backlash against it.
Actually, we’re already starting to see that. There’s something I heard about somewhere in upstate New York: there was some county that was possibly going to ban any data centers, which may be where it starts. You know, it may be that communities say no data centers, and if that means no data centers, then that could really hold this back. I’m just wondering, do you have any thoughts on that? Is there anything that people could be doing that could help society as a whole see the potential without being afraid of the possible downsides?
And there, you know, let’s be honest, there are going to be some downsides.
Sam Klein: 31:11
Yeah. I mean, I think this is kind of like everything, right? We went through this when Google and search became available: a lot of the same fears, a lot of the same pushback, right, from the education system to people saying, oh, you’re not going to use math anymore with computers, and all these things. I think we kind of go through these similar waves.
If you go back through history, the way that we tend to get over those is that more and more people share their story of something positive, right? And so that helps people start to see the benefits and opportunities. And so, you know, the encouragement is to share, like, share often and don’t be shy about it. And you will find that, yeah, you may end up in a conversation where someone’s like, well, that damn AI thing, I’m not interested in that.
That’s the devil, or whatever. Yeah. You know, do your best. But I think the more that we all talk about what we do see and the positives, the more that they will at least try and play with it. And sometimes they’ll do it after they’ve yelled at you, and they’re going to go home.
They’re going to look around. They’re going, well, maybe I’ll just.
John Corcoran: 32:29
Maybe I’ll see what.
Sam Klein: 32:30
I can.
John Corcoran: 32:30
See.
Sam Klein: 32:31
Yeah. Yeah. And so, you know, I mean, I think it’s just encouragement. There’s some truth to a lot of their fears, right? There’s a lot of stuff that nobody knows as to what AI means for all of us.
I tend to always be the glass-half-full type of person. I believe that, you know, this is like jet fuel for us. We can be smarter. We can do more amazing things. We can be more creative. We can unlock more ideas that we have personally.
And so the more that I use this, the more I see that future, that opportunity for us. So, you know, talk about it.
John Corcoran: 33:11
We tend to not see the things that we have to do over and over again that are repetitive or mundane or that suck our energy, except in hindsight, looking backwards. We see, like, oh, didn’t it suck when someone’s job was to be in an elevator all day long, just pulling a lever and helping people go up and down? And we realize, oh, okay, we can put some automation in, and those people can go do something that’s not as mundane and boring. And there’s lots of things like that today, which are just outside of our worldview. We just accept them, you know. But I think maybe a few years from now, we may look back and go, man, how awful it was that people had to just schedule other people all day long, or they had to do something else that was mundane and boring, that could be sped up and that could free them up to do other things that are more scintillating and interesting and provide more economic value to society. Hopefully that’ll be the case.
Sam Klein: 34:08
Yeah. Hopefully. Right. And, you know, I’ve got to believe so. Also, there are going to be so many industries and things that are born from the fact that we were able to use AI, and we don’t even know what those look like yet. You know, we’re still looking at the obvious, because we’re only so close.
You know, our vision only goes so far. And so we don’t even know what we’ve unlocked. And there are going to be many, many breakthroughs on so many different levels that will open up new opportunities for new industries, or more opportunities for businesses to use technology that was really hard or really expensive at one time, but now they can do a lot of it. And so that means it’s really great for people, or for that business, in ways we just don’t know. And so again, that’s why I say play, get to know it, get to understand it, get to know it on the level of prompting, so that you can stay in control of it.
So as it evolves and as it changes, you understand, like, the bottom layer, the foundation of it, so that you understand how the model is changing and what’s happening.
John Corcoran: 35:19
Yeah, yeah. Then you can get to the more advanced stuff, like using an agent to do all of your work, or a team of agents to do all of your work, which is stuff that I’m still trying to work on. Sam. This has been great. Where can people go to learn more about you, connect with you, learn more about Cosmo and the work that you do?
Sam Klein: 35:37
Yeah, I mean, you can find me on LinkedIn, obviously. I also have a website, aiarchitechs.co. And then you can look at cosmo.ai. If you put in your email there, I’ll ping you an invite to the technology. Yeah.
Great.
John Corcoran: 35:56
Sam. Thanks so much.
Sam Klein: 35:58
Yeah. Thank you, John. This was great.
Outro: 36:03
Thanks for listening to the Smart Business Revolution Podcast. We’ll see you again next time. And be sure to click subscribe to get future episodes.
