MARDREAMIN’ SUMMIT 2025
MAY 7-8, 2025 IN ATLANTA - GA


Leveraging Custom GPTs for Efficient Data Cloud Implementations in RevOps

In this presentation, I will demonstrate the custom GPTs I have developed to streamline and optimize Salesforce Data Cloud implementation. Attendees will learn about the no-to-low code approach I employ to build these tools, making Data Cloud setups more accessible and efficient for RevOps professionals. This session will provide a comprehensive case study and practical insights into harnessing AI for productivity and efficiency in the RevOps space.

This topic addresses the intersection of AI and revenue operations to showcase how innovative technologies can drive significant improvements in efficiency and productivity. By sharing practical examples and a no or low-code approach, this presentation will empower RevOps professionals to leverage AI tools, making complex implementations more accessible and effective.

Lara Arnason

Solutions Architect, SaaScend


Video Transcript

Speaker 0: Alright. Hello, MarDreamin’. Welcome, everyone. We are so excited to have you all joining us today. My name is Rhona Mansour from Sercante, and I’ll be moderating today’s session. Before we get started, I have a few housekeeping items that we’d like to cover. So, yes, these sessions are recorded and will be available on demand after the event. We’ll also be following up with them via email, so don’t worry. And if you have any questions, please post them in the Q&A tab above. And lastly, please use the chat: share your emojis, GIFs, whatever you need to share. Let us know where you’re joining us from. We wanna hear from you. Now let’s get started. I’d like to introduce you to Lara Arnason, who has an awesome session ready for us today, all about leveraging custom GPTs for efficient Data Cloud implementations in revenue ops. So passing it over to you, Lara.

Speaker 1: Hello. Well, I know that you just introduced me, but I’m gonna introduce myself one more time. My name is Lara. I’m a revenue operations consultant at SaaScend. I came to the world of RevOps as an academic, so it may not surprise you to find that I love doing research on new technologies and how we can use them to improve our lives and the lives of our clients in revenue operations. I’ve been working with ChatGPT since it was first released, and today I’m going to share what we found out about some of the best ways we can leverage it for RevOps purposes. Specifically, I’m going to talk about how we can use custom GPTs to support the implementation of some of the more complex RevOps solutions like Data Cloud. And before I get going, I’d like to send a wholehearted thank you to our sponsors for making this happen and for having me here. As exciting as it is to be in the weeds of helping clients and solving puzzles, I’ve gained a lot from knowledge sharing in this community, so I’m grateful to be able to give back a little bit today.

Let’s get started with the agenda. Here’s what I plan to talk about. First, I’ll introduce the topic of generative AI and the impact it’s had on the workplace in the nearly two years since ChatGPT was released. I’ll talk about what we’ve seen in revenue operations and how we use generative AI to solve everyday challenges. Next, I’ll dig right in and show you a case study. I’m going to tell you about a Data Cloud implementation that my team did under hard deadlines with many challenges, and show you how we leveraged custom GPTs to create our own tools and shorten time to implementation. After the case study, I’ll talk about custom GPTs, what they are, and why they’re useful for Data Cloud, and indeed any complex project that could use niche tools and strategic guidance. Finally, I’m gonna explain how you can do this yourself and apply it to improve your productivity and understanding on a day-to-day basis.

The goals for this session include reviewing some of the common challenges that RevOps teams face with complex projects, and specifically Salesforce Data Cloud implementations, so that we can take a close look at how we can leverage custom GPTs as niche tools or strategic guidance to help meet these challenges, with a special focus on Data Cloud. You’ll know by the end of the session how to create custom GPTs of your own to produce more useful outputs than ChatGPT on its own, which I’ll refer to as vanilla ChatGPT. We’ll also talk about how to co-work with generative AI, which currently seems to be producing the most reliable results. As a bonus takeaway, if you send me an email request at the end of this session or after the session, I will send you a little library I’ve got of custom GPTs that I’ve created specifically for revenue operations.

Let’s get started. Now, before we start talking about impact, I want to make sure that we’re all talking about the same technology. So when I refer to generative AI, I’m referring to artificial intelligence that creates original content in response to a user’s prompt or request. That’s the definition from IBM, to stand in contrast to other forms of machine learning. So unlike traditional machine learning models that analyze data to make predictions or classifications, generative AI creates something new, like text, images, code, or even music, based on what it’s learned from vast amounts of data. There are many examples of generative AI in use today, and I’ve listed some of the most recognizable ones here. The most well known, of course, is ChatGPT. You may also have heard about Claude by Anthropic, Google’s Gemini, Microsoft’s Copilot, and Meta’s Llama 3.

Now that we know what generative AI is, let’s have a look at the impact. When OpenAI released ChatGPT nearly two years ago, it was closely followed by a question that continues to be asked today, which is: how will generative AI affect the workplace? In those two years, there’s been a strong sense of both concern and optimism. On the concern side, we’ve heard that generative AI is going to replace human jobs. We’ve also heard that it can’t be trusted because of its tendency to hallucinate. And on the optimistic side, we’ve heard what the revenue operations community is experiencing firsthand, which is that it can make us much more productive and it’s transforming how we work. So as individual workers and teams in RevOps, generative AI is augmenting what we can do and how fast we can do it. It has not yet replaced us, but it has changed us. Let’s have a look at how.

Research is in its early stages, but so far, studies suggest that it’s increasing worker productivity anywhere between 14 and 55 percent. It’s also increasing the quality of work completed, specifically by knowledge workers. The revenue operations community appears to be answering this question about how it’s transforming our work faster than researchers can study it. The transformation in RevOps is taking place in two key ways. It’s already become part of the platforms that we’re working in. Within the CRM, with the launch of Einstein AI and Agentforce, generative AI is now part of our everyday. This is changing how we interact with our data and how we organize processes. It also means that on the operations side, we’re developing new skill sets to help us increase efficiency. Natural language interaction with our data is becoming a skill set that we all need to develop. And as we do, we’re discovering how to optimize outcomes by balancing structured processes with natural language prompting. Externally, it has become an essential resource for many of us. It helps us increase our own productivity, drafting content, emails, even code. No one is stuck on first drafts for anything anymore. It helps each of us become more versatile because we’re able to dive into new tasks and bridge knowledge gaps quickly. It also helps reduce time to implementation in a number of different ways: with design, data processing, and other ways that I’m going to show you in our case study shortly.

In the case study that I’ll show you, my team leveraged generative AI for two key functionalities, which I will get into in the following slides. Each of these helped us to meet the key challenges that we faced. First, let’s have a look at the challenges. Our challenges were that we were facing a short turnaround time for our first implementation of Data Cloud. Since we were configuring it for the first time, we needed to become experts quickly. Official documentation on the advanced setup was somewhat scattered and required a deep knowledge of Data Cloud that we were still acquiring. And, of course, our client needed an advanced configuration. Specifically, we needed to set up a custom Ingestion API connector, and we needed to trigger a flow based on record creation in the data lake object, and this needed to happen in as close to real time as we could manage it. So those were the main challenges.

And now I’d like to go over a little bit of how we met them. So we found that ChatGPT on its own could not provide the sort of output that we needed. We needed specialized knowledge. We needed to synthesize knowledge from the various sources that were online, that were in instructional videos on YouTube; they were in a lot of different spots. We found that custom GPTs really helped us to synthesize that knowledge on one hand, and also to create some tools that would help us save time on repetitive, highly technical tasks that were becoming time sinks as we were hurrying to our deadline. For the custom GPTs, those tools that we created, examples that I will go over today, we created one for each schema structure that was required. In the back end of Data Cloud, when you’re creating a custom Ingestion API, you need to create your own schema and upload it. Every time a field type changes or you want to add a field, you need to change the schema and re-upload it. The format is extremely specific, and just consulting ChatGPT on its own and giving it the field list wasn’t producing the consistent output that we needed. Also, trying to do this manually is a pretty big time sink, especially since it was our first implementation and we needed to iterate and test quite a bit. One of the things that we did was provide instructions, templates, and examples of good output to guide the GPTs to provide us with what we need. We created custom GPTs as tools in the same way that I suppose a coder might create a little mini app to help with certain very small tasks. I will get into that shortly, but I did want to go over the second type as well. In order to get help synthesizing all the knowledge together and efficiently answering our questions, we also needed troubleshooting assistance for when something wasn’t working as expected. It was really difficult to figure out where to find the answers in the documentation online. So I created an explainer/tutor/strategic guide type of custom GPT, and I’m sort of calling this the explainer type. It was familiar with all the systems at play. I gave it expertise not only in the systems, but also in the logic of critical thinking and strategic guidance. I incentivized it towards our goal, and the outcome was that it was extremely useful for troubleshooting unexpected issues in light of how Data Cloud was intended to work.

To double-click on this a little bit, this is the first type, the tool type of custom GPT that I created. The one that I’m going to show you today I named YAML My Schema. If you’re creating an Ingestion API connector, you need to upload the schema in an OpenAPI format, and a YAML file is what we used. In order to make sure that we could consistently create a YAML file based on just knowing the field list that we needed, and make sure it was all in the structure that we required, we use my custom GPT over here. So I’ll just demo that for you now to have a quick look. For this prompt, I’m going to do a quick show of natural language prompting here. It’s just saying what you’re trying to do: I wanna create an engagement-type YAML schema for a custom Ingestion API, and then I’m going to submit my field list. The custom Ingestion API, just to step back a little bit, is one of the ways that Data Cloud can take in data. There are built-in connectors that make this a bit easier, but the Ingestion API is for when the standard connectors are not sufficient for your needs. That’s what we’re talking about here when creating a schema. So this ended up being highly technical, and this is one way that we made it easier. So this is my list of fields that I know, and then I’m just saying what I want to create, what fields should be required, and then I just upload it here, along with which type of object we’d like it to be. Now, one thing that I did wanna note here is that it is equipped to help me, as the user, validate its own response. It’s thinking out loud here based on its instruction set, and then it’s giving me a draft schema in a code block. This I can just copy and paste into a text editor, save as a file, and upload to Data Cloud. It also is prompting me here to check and make sure that everything is correct before we go ahead, and it’s giving me the opportunity to make changes. So this was extremely useful to save time, and it’s just a little example of how this can be done. Behind the scenes here, we do have an instruction set for this particular one that I created just on my own, and then there was another one that I created using ChatGPT because it was enhancing reliability. I don’t think we’ve got time to demo that today, but it does exist and it is linked in your slides, so it will be available if you wanna play around with it later.
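The schema generated in the demo isn’t reproduced on this page, but the general shape of an OpenAPI-style YAML schema for an engagement-type Ingestion API object looks roughly like the sketch below. The object and field names here are hypothetical, and the exact header keys, supported types, and required-field rules should be checked against Salesforce’s current Ingestion API documentation.

```yaml
openapi: 3.0.3
components:
  schemas:
    WebinarAttendance:            # hypothetical engagement object name
      type: object
      properties:
        attendance_id:
          type: string            # unique identifier for each engagement record
        contact_email:
          type: string
        event_time:
          type: string
          format: date-time       # engagement data is keyed off an event timestamp
        minutes_attended:
          type: number
        attended:
          type: boolean
```

Saved from a text editor as a .yaml file, this is the kind of artifact that gets uploaded when configuring the Ingestion API connector, and it has to be regenerated and re-uploaded whenever a field is added or a type changes, which is exactly the repetitive step the custom GPT automates.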

Okay. So back to our slides here. The next type of GPT was the explainer type, and this was the advisor or tutor. Now, this one — sorry, I’m just pausing here to make sure that I am covering everything. So this one, again, was particularly useful for help troubleshooting issues across systems. I was able to ask it to become an expert on each system, and it was able to quickly bridge knowledge gaps, explain new sources that I found and how they fit into the overall picture, and generally was a game changer for just figuring out how my problem fit into the big picture of Data Cloud’s overall design. This one produced far more reliable output than vanilla ChatGPT because I was able to ground its responses in reliable sources. Within my instruction set, I was able to tell it to essentially refer to key reliable sources and ensure accuracy. There are some tips for writing instruction sets that can increase accuracy that I will go over at the end. And I just wanna say that this was so useful that internally, we eventually ended up referring to the experience of this friendly Data Cloud tutor whenever we had questions by declaring, “let’s consult the wizard.” So I’ve linked it here for you to check out if you like, and there’s a v2 version; that’s the one that’s been improved by vanilla ChatGPT. One of the methods I’ll go over at the end is that if you ask vanilla ChatGPT to act as an expert prompt engineer and provide suggestions for improvement on your prompt, it is really good at troubleshooting if for some reason you’re having trouble achieving your goals with your existing prompt. It’s really good at providing great suggestions for improving reliability.

Okay. To summarize the learning outcomes from this case: creating tools to automate time-sink tasks allowed us to shorten time to delivery and iterate and test quickly. Creating a custom GPT to bring together that expertise and really synthesize information about the multiple systems we were using was a great way to get help troubleshooting unexpected behavior across systems. Compare this to, for example, just reaching out to a support team for one specific system: it shortens wait time. You don’t have to wait for a help desk support ticket to come back; you’re able to get things done at speed. Then finally, for conducting our first implementation of Data Cloud itself, using custom GPTs allowed us to ground the responses in reliable sources that we had already found. Instead of going to every individual website on its own, we were able to feed every individual website and the little lists of requirements that we found into our GPT and then ask it questions. Zooming out to the higher-level learning outcomes, the takeaway here is that writing good instruction sets is essential. Vanilla ChatGPT could not really help us. Now, since we did this, vanilla ChatGPT has actually improved a lot. However, when I’m using a custom GPT compared to vanilla, it’s still more useful and saves me more time as that kind of human in the loop. So that’s essential to leverage these tools, and it will become more essential as generative AI is incorporated increasingly into the systems we work with every day. The next lesson here is that it’s important to use generative AI as a collaborator rather than depending on it alone for reliable output. At times, the output would suddenly change for the tools — some small structural detail, for example. It’s important for a human to review and test them, but interacting live with the GPT to correct output or question its responses is still time-saving compared to manual updates. And it also provides easy version control, because your conversation history is saved. By including user interaction instructions in the instruction set prompt, we also empowered the AI to appeal to the user for validation of output. It’s reminding us: hey, check this, make sure it meets these requirements — making sure that we’re working together as a co-working team to deliver optimal output. Overall, the lesson here was that we get increased reliability beyond standard ChatGPT when we work on these instruction sets and provide example files as well.

Now let’s talk about how you can do this yourself. Creating a custom GPT is a simple process with six pretty easy steps. First, you’ve got to write an instruction set. Now, this is a bit of a balance. It’s a bit of an art; it’s a bit of a science. I do have some steps that we’ll go through shortly on what to think about when you’re writing an instruction set. Once you write the instruction set, improve it with ChatGPT. You can try it on its own if you like, or if you’re in a hurry. But typically, vanilla ChatGPT acting as an expert prompt engineer is gonna give you some great suggestions, and that’s gonna be based on the empirical methodology research that’s being done in machine learning on prompt engineering. So that’s going to assist you if you are trying to achieve a goal and you can’t quite get the reliability that you’re looking for. The third step here is to add example files. One of the powerful reasons to use the custom GPT functionality is that you’re able to provide examples of what a good output looks like, and it’s not counted in your 8,000-token limit for context. So it allows you to provide extra information that can be a set of guidelines; it can be a template or a structure to refer to. And this does help improve reliability as well. The important thing, though, is of course to tell it how to use these files when you upload them. Next, you wanna select your capabilities, and I’ve got a screenshot and a slide or two about what those look like. That’s just selecting: do you want it to have DALL-E available? Do you want it to use the code analysis? Do you want it to use the web functionality? We’ll show you what that looks like shortly. Once you select the capabilities, test it out and share. I’m not including iteration in here; that’s covered under testing.
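The “act as an expert prompt engineer” step can be as simple as a short meta-prompt pasted into vanilla ChatGPT. The wording below is an illustrative sketch, not the speaker’s actual prompt, and the scenario details are hypothetical.

```text
# Hypothetical example wording, not the speaker's actual prompt
You are an expert prompt engineer. Below is the instruction set for a custom GPT
that turns a plain-language field list into a draft YAML schema for a Data Cloud
ingestion connector. The problem: the field order in the output changes between
runs, and the draft sometimes omits the required-fields section. Rewrite the
instruction set so the output is consistent and easier to validate, keep it
within the instruction length limit, and briefly explain each change you make.

--- CURRENT INSTRUCTION SET ---
<paste the existing instruction set here>
```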

The tips I have for writing an instruction set are, first, to write it like an improv briefing. When you’re writing the initial draft of the instruction set, this can be done in a stream-of-consciousness style very quickly. I’ve got a list of five things to think about, which is coming up, that should help make it very effective. Second, once you’ve done your stream-of-consciousness writing — and this needs to be under 8,000 tokens — you can ask ChatGPT to improve it. Now, this improvement can be: hey, this is longer than 8,000 tokens, can you fix that? It will. Using vanilla ChatGPT as your assistant to create the ideal prompt is very useful that way as well. The third step is to add example files, which I’ve gone over previously. Again, these can be templates, guidelines, anything you want your GPT to reference when producing output. Then a couple of other items that are useful: if you ask it to proceed step by step or line by line for complex logic, or if it’s reviewing a document, that can really help in two ways. One, it can show you how it’s thinking through everything, how the analysis is proceeding, which gives you the insight to see: hey, is this doing what I want it to? Also, it just enhances reliability. If it’s going line by line, it’s a lot less likely to miss anything, and you can then ask it to do some self-validation. Within your prompt, you can ask it to go back and validate after it’s created the output, to ensure certain requirements are met. Once your GPT is ready to try out, you will select the capabilities, test it, and share it with your colleagues, or indeed the GPT store if you’re feeling generous. One quick note is just to make sure that there is no proprietary data in your example files. Outside of the CRM, you don’t have that Salesforce trust layer protecting you from accidentally sharing any private information, so you need to make sure that anything you put in there is not personal, private, or sensitive data in any way. I just wanted to make that side note there.
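Put together, an instruction set written in this improv-briefing style might look something like the sketch below. It is an illustrative example with hypothetical file and object names, not the actual YAML My Schema instruction set, but it shows the role and context, the example-file reference, a step-by-step process with self-validation, and the failure handling covered in the tips above and in the rubric that follows.

```text
# Illustrative instruction set (hypothetical names, not the speaker's actual GPT)
Role: You are a schema assistant for a RevOps team. You turn plain-language field
lists into draft OpenAPI-style YAML schemas for a Data Cloud ingestion connector.

Context: The user will paste a field list and tell you the object type (profile or
engagement) and which fields are required. Refer to the attached file
"example_schema.yaml" for the exact structure and indentation to reproduce.

Process: Work step by step. 1) Restate the field list with the types you inferred
and ask the user to confirm. 2) Generate the schema in a single code block,
following the example file exactly. 3) Validate your own output against the
example structure and call out anything that looks off. 4) Ask the user to review
and approve before they upload it.

When things go wrong: If a field type is ambiguous, ask instead of guessing. Do
not invent fields. If you cannot produce a valid schema, say so and suggest an
alternative approach rather than giving up.
```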

Okay. So the five step instruction set kind of drafting guide here, um, this is based on my experience creating over 50 custom GPTs. My colleagues have had great success following this as a rubric. Let’s go through them real quick.

  1. First, set the scene with detailed context. Again, imagine you’re briefing someone on an improv scene: you need to tell them who they are, where they are, what they’re doing, what their goals are. I’ve listed some questions here to ask yourself about the GPT, or I’ve worded them as if the GPT is asking you. So: who am I? What do I know? A couple of things to point out here: you can include what systems it’s familiar with and what knowledge it has that will equip it for its task. So, for example, maybe it’s great at critical thinking. Maybe it’s an excellent analyst of very specific documents. If you gift it with that equipment to apply to your problem set, it will become the expert that you need. If you want to also incentivize it, telling it what it wants to achieve as an assistant to its user can also be very powerful. So it kind of helps to create the scene: what is the audience interaction of the scene gonna look like for your improv participant here? How will it interact? What questions should it ask? Should it go in an interview style? Should it be step by step? Basically, you need to tell it what it’s really good at and who it is to the user. You know, are you a helpful assistant? If you’ve done some implementations of Einstein AI, then you might see some of this used in some of the prompt templates, where they’ll say: you are this kind of assistant doing this. You want to give it a role and a purpose, and that really helps. If you think about it, ChatGPT is trained on so much data that the ability to focus its vision on the subset that you need creates really powerful output.

  2. Next up, give it an example of what good looks like. This is again related to that example data; tell it how and when to refer to it. Templates and structures, especially if you’re creating those tools — that’s where these structures become really, really important. If you are creating more of an advisor or a strategic analyst, then, for example, if you’re creating a business advisor, you could make it great at visualizations for business use and have it make suggestions to you. And you could provide some examples that prime it to know what you think good looks like for certain kinds of visuals.

  3. Finally, tell it what to do when things go wrong. This is very important. For some of the custom GPTs I’ve created, I have found that treating it like a fresh intern can be pretty powerful. A colleague of mine who is a prompt engineer had advised me to say, you know: don’t give up, do your best, and try an alternative method if needed, as a way to prevent my custom GPT from glitching. I was trying to get a documentation GPT set up, and in order to do that, it was requiring lots of tokens and it was just getting stuck. So when it got stuck, with that one little incentive — in the same way that you would treat a real person — it would simply look for an alternative method. It would keep going until it found me the output that I needed, and it was able to deliver that and make suggestions of workarounds. Another important factor here is letting it know when it should give up or ask for human input. I’ve already talked about giving up, but asking for human input and approval is also very helpful. You can have your user ask your GPT to generate, say, a downloadable Word document with certain content. What you might wanna do is have your human provide approval before generating that final document. Now, this has changed a little bit with Canvas coming out, because it’s a bit easier to work on a document that way in tandem with your GPT. I don’t believe it’s available for custom GPTs at the moment. If you’re using a custom GPT and you want it to generate a downloadable Word document, you want to make sure that your user has an opportunity to approve it first. That might be part of the process that you provide for your GPT. Again, you can just describe this process right in your instruction set.

For the final setup, you will need to select the capabilities of your GPT. A couple of tips here for RevOps use: unless you are manipulating images specifically, deselect DALL-E and select the other two. The code interpreter and data analysis capability is going to be necessary — and this is a screenshot from the creation of a custom GPT, with DALL-E deselected. Rumor has it that the DALL-E selection can add to the risk of hallucination. So for now, I don’t include it unless for some reason I need something image-specific, and I will continue that until I hear otherwise. But for RevOps, you definitely will need the code interpreter and data analysis for most use cases. That’s the final setup: capabilities.

And then I have some final thoughts to review today. I know it says final thoughts, but it’s not quite the final slide. Some reminders of best practice, some tips based on the lessons learned from our case study. First, remember that generative AI is co-intelligence, so it does not replace domain expertise or remove the need for us to participate in outputs. Early research suggests that we’re getting the best results for productivity and reliability when we work with it as a partner, as a coworker, as a first-draft assistant, and as a way to synthesize knowledge and save time on repetitive technical tasks. And this is another friendly reminder to keep privacy in mind. When you’re working externally to the CRM, you must keep data privacy in mind, whereas within the CRM the Salesforce trust layer will take care of this for you. When working externally, you have to make sure that private and sensitive data is not submitted to the generative AI. This is because within the CRM, Salesforce actually has an agreement with OpenAI to not save any of the data, and it’s also got that trust layer that masks personal and sensitive information, which you can configure custom or use what comes out of the box. Despite this limitation of not being able to just throw your data in there, external use of generative AI is very helpful for increasing the productivity of individual workers and teams. Personally, from my case study, I hope that today I have shown you how it can be useful. I think that I have sped through a little bit of my talk so that I might have some time for questions. But as a summary: AI is quickly changing how we work. Writing good instructions is quickly becoming an essential skill set for revenue operations, both within CRM and externally; we’ve looked at the increasing importance of that. And as it becomes more and more embedded in our processes, we’re gonna be learning as RevOps professionals how best to balance that need for structure and flexibility. So remember how I said that writing a good instruction set is like writing an improv briefing? One reason why I consider that to be very true is that with an improv briefing, you need that mix of structure and flexibility. You need to give it as much context as you can and then say go, and then you’re relying on its creative genius to produce something really special. And that’s kind of the magic of generative AI and ChatGPT. So I hope your takeaways from the case study today are that the challenges from the case are common to revenue operations and widely experienced. We’re always under deadline, we’re always learning new systems, and technology is changing constantly. So to be able to get a more versatile skill set quickly, and be able to stop before having to reach out to a colleague or a former colleague to ask for help with something that they might have experience in, you can actually think: who do I need, who would be useful to talk to right now? And then you can make them with ChatGPT and custom GPTs. I have provided some references here in case you’d like to look at some of the studies that I was referencing at the beginning of this talk. This was regarding the impact on productivity for the average worker. Again, research is really in its early stages.
So with a lot of these, if you dig really into the details, there are some bold assumptions that some of them make in their experiment design or their choices. But what we’re experiencing every day in RevOps is that it is making us more productive, and it can really help. It’s not, you know, suddenly creating perfect output all the time, but there is an opportunity here to really increase the amount of value that we’re able to offer to our organizations. Finally, I’ve added my email address here. If you’d like to try out more of the custom GPTs that I’ve created, you can reach out by email and I’ll be very happy to send you a spreadsheet of my favorite custom GPTs for RevOps. I believe I’ve got about ten minutes if we are able to have questions. Yeah. Are we? Yeah. Okay.

Speaker 0: Can you all hear me? Thanks, Lara, for an amazing session. It does look like we have a few minutes there for Q&A. So if anyone has any questions, drop them in the Q&A tab. But watching your session, I do have some questions that came to mind. I wanted to know: so you’ve built this custom GPT now. What are your ideas on how to maintain it after? So what’s typically needed from a maintenance perspective after you’ve implemented this custom GPT?

Speaker 1: Well, for me personally, I work in a consulting agency for RevOps. When I’m maintaining a GPT, I really have to go and alter the instruction set depending on my use case. For example, I’ve got one that helps me with SOQL queries, and I just have to put in the logic that I need, but I can also adapt it for SQL as well. So there are ways that I can just go in, give it expertise in an additional system, and use it for that additional use case. From a maintenance perspective, one thing to consider is the information that you’re feeding it. So, for example, if anything has changed with a recent release and you’re pointing to an old website, don’t give that to your GPT. It’s probably not going to be very helpful. It might not prevent it from giving helpful answers, but it’s certainly not helping point it in the right direction. I’d say for any kind of link that you provide your GPT, you definitely want to keep that updated. Also, the example files: if structures are changing, or any kind of formatting requirements change, you need to change the example files that you give it. Your GPT is as helpful as you make it. So it’s really easy to customize, but I found that drawing on kind of a humanities skill set to make those changes is the surprising part of what works best here. I find a lot of my more quantitatively focused colleagues find it a bit overwhelming to suddenly create an improv briefing kind of thing out of nowhere. For me, it doesn’t take very long. But again, you can leverage ChatGPT, because no one has to do the first draft of anything anymore. You can just tell ChatGPT what it is you’re trying to do and that you’re trying to create an instruction set, and it will help you there as well. That’s more than an answer to your specific question, but I hope I did answer it.

Speaker 0: Yeah, for sure. No, amazing. Thanks for the tips. It’s definitely helpful and something everyone can take with them too as they build out custom GPTs. I do have another question; it’ll be my last one. But I wanted to see how you measure the effectiveness — so you had mentioned you compare it. You put it into ChatGPT to compare the effectiveness of your custom GPTs. Is that correct?

Speaker 1: Yes. To compare effectiveness, for example, for the YAML one: there is YAML My Schema here, and then I’ve got YAML My Schema v2, and that’s one that was improved by ChatGPT’s input. The difference that I found there was that I was having a very specific problem: in this particular output, my custom GPT was changing the order of some of these fields and randomly — or maybe not randomly — putting them at the bottom sometimes. And it still was useful to me, because I could see, oh, that’s on the bottom, I just need to move it. But, of course, it’s going to be much more useful if it is very reliable. So I worked with ChatGPT as a prompt engineer to say, hey, here’s my problem. And really the key here is fully defining the problem as explicitly as you can and then getting help. Again, just asking yourself: who would be able to help me right now? And then create them, and then get that input that you need. So the comparison here was with this v2, and I can give you a very short peek into — hang on. Can I — I can give you a short — sorry?

Speaker 0: Love that. I love sneak peeks. Yeah.

Speaker 1: Okay. I’m just trying to get the — for some reason, it’s — okay. I’m going to take this off screen for a moment. Here we go. And I will show you the — I’m not sure if we’re at time, so let me know if I’m taking too long here, and I will wrap up. Okay. Okay. Excellent. So if I go to my GPTs — and, actually, I may not be able to give you the sneak peek, because for some reason it’s not signing me in properly. One more moment. No, for some reason it’s not signing in properly on this particular window, so I’m not gonna be able to show you today. But if anybody would like a sneak peek, send me an email. I can send you a Loom and show you what I mean anytime.

Speaker 0: Awesome. Thank you so much. So, yeah, that concludes today’s session, unless anyone else wants to drop a question in the Q&A tab. Thanks again for joining us, and a special shout-out to our sponsors for their support. And thank you, Lara, for presenting this awesome information about custom GPTs. I’d also like to thank the sponsors once more; without them, MarDreamin’ would not be possible. So, all attendees, please head over to the agenda to check out what other sessions you’d like to join next, and we’ll see you all soon. Thank you. Thanks, everyone.