855 Laura Bell Main: AI and Cybersecurity

A lot of the software we build now is built from other components, libraries and frameworks that are third party, they’re provided by other people. And (fade music softly) if you are able to influence those and put something malicious into a library that lots and lots and lots of people build their software around, then we have big cascading issues through our whole ecosystem. And that’s what we call supply chain security and on the AI side, we’re also seeing a whole novel bunch of attacks, which we call a prompt injection, because prompts are freeform, you can ask them literally anything you like, off you go, be as creative as you like. But if you structure a prompt in a certain way, you can start to encourage the AI to share information it wasn’t intended to (start MUSIC SOFTLY – from here) share with you. We’ve seen examples of people being able to get passwords out of LLMs, sensitive information.

Artificial Intelligence is changing everything about our lives and our businesses at such a rapid pace it can be hard to keep up with all the innovations let alone how to protect our identities and our businesses from attack. Laura Bell Main is a cybersecurity expert and CEO of SafeStack, helping companies large and small worldwide build application security skills, practices, and culture across entire engineering teams. Today we talk about how AI is changing everything in the world of cybersecurity and what you need to know to stay safe.

MELINDA

Hi, I’m Melinda Wittstock and welcome to Wings of Inspired Business, where we share the inspiring entrepreneurial journeys, epiphanies, and practical advice from successful female founders … so you have everything you need at your fingertips to build the business and life of your dreams. I’m a 5-time serial entrepreneur and the CEO and founder of Podopolo, the AI-powered interactive app revolutionizing podcast discovery and discussion and making podcasting profitable for creators. I’d like to invite you to take a minute, download Podopolo from either app store, listen to the rest of this episode there, create and share your favorite moment with our viral clip sharing tool across social media, by text, or any messaging app, and join the conversation with your questions, perspectives, experiences, and advice … Because together we’re stronger, and we all soar higher when we fly together.

Today we meet an inspiring entrepreneur and cybersecurity expert who helps technology businesses bring Application Security and Secure Development practices to their engineering.

Laura Bell Main is the CEO and co-founder of SafeStack, an online education platform offering flexible, high-quality, and people-focused secure development training for fast-moving companies, focusing on building application security skills, practices, and culture across the entire engineering team. Today we talk about the challenges presented by the speed and diversity of AI development, the perils of prompt injecture, and lack of transparency in the large language models that power AI.

Laura will be here in a moment, and first,

Imagine … a tool at your fingertips to help you easily understand what motivates and drives the behavior of your team members… Now you can improve your leadership skills, reduce stress and boost productivity all around, invigorating a successful workplace culture at your company… with Dignify. It’s an interactive roadmap on how best to talk with your team members, so they feel seen, heard, and respected for who they Are, not just what they Do. On Dignify, you and your team will learn to treat with each other with DIGNITY. Go to Dignify.com and use the coupon NECTAR-L945 to take the survey for free. That’s Dignify.com with coupon NECTAR-L945. See how Dignify could work for your business too… set up a free consult call with Wings guest Michele Molitor at https://bit.ly/dignifyatwings. That’s bit.ly/dignifyatwings. Details in the show notes.

Every hour of every day there is an exciting new advance in AI and just as fast there is a new scare – a hallucination, a deep fake, new AI inspired malware, or just this week a security breach in Open AI’s ChatGPT. FYI, competitor Google’s DeepMind figured out they could get user information from ChatGPT simply by prompting it to repeat the same word over and over again. Yikes.

With innovation moving at lightning speed and so many companies building on top of third-party large language models like ChatGPT and using many other third-party libraries, there are new dangers like what my guest today calls “supply chain insecurity”. Laura Bell Main is a cybersecurity expert and the CEO and co-founder of SafeStack, where she is on a mission to train some 30 million software developers to incorporate cybersecurity into their applications from day one. She also hosts the podcast Build Amazing Things Securely.

Today we talk about the new security challenges for businesses large and small and how best to navigate them, plus how technology-based companies can ensure their engineers are optimizing best practices as they develop and maintain software products. Laura also shares her journey as an entrepreneur, plus how she productized what was originally a service business.

Let’s put on our wings with the inspiring Laura Bell Main and be sure to download the podcast app Podopolo so we can keep the conversation going after the episode.

Melinda Wittstock:

Laura, welcome to Wings.

Laura Bell Main:

Thank you so much for having me, Melinda.

Melinda Wittstock:

Cybersecurity has always been a big issue, but it’s becoming an even bigger issue with AI and everything. I want to just jump in because AI is on everyone’s mind. How’s that changing your business?

Laura Bell Main:

It’s very interesting. So cybersecurity is as old as the hills as a problem. It’s a human problem. As long as there’s been humans, humans have wanted to take things that weren’t theirs or improve their life and they would apply whatever technology at the time, AI is just the latest technology. And I think what is different this time is while there’s a lot of unique challenges with AI that are probably a little bit big to go into in a podcast, the thing that’s quite interesting is the speed and the diversity of systems that are all popping up at the same time.

So normally when we see a technological change, we see a step change, bit after bit it grow. But this time, we’ve just had the DevDays from OpenAI, and even within 24, 48 hours, we are seeing literally hundreds of new systems pop up. And it’s very hard for us as people, as company owners, as entrepreneurs or whoever you are in the world, it’s really hard to create a little risk model and understand the system and go, “Well, this is how I can safely do it,” when there are so many popping up at the same time. And when so many of them are using components that we’re not able to look at the steps that they’re taking and evaluate them.

So that’s the big difference with AI systems, is that evolution that those algorithms are taking with our data. They’re taking an input and they’re turning it into other decisions. And we can’t easily look at how those decisions are made at the moment and that makes it very difficult for us in any context to decide is this a good thing or a bad thing? Is this safe or not? So I think there’s going to be a lot of very fast-paced exploration and learning in the coming months and years.

Melinda Wittstock:

Exactly. You need an AI to be able to learn what’s going on in a large language model or whatever. I mean, just the pace of innovation. It strikes me that you’re in the Whack-A-Mole business, right?

Laura Bell Main:

Absolutely.

Melinda Wittstock:

And when there’s no transparency and you have this velocity of innovation, gosh, it’s overwhelming.

Laura Bell Main:

It is. And I think if you looked at Europe at the moment, there’s a really interesting legal push going on that they’re actually putting forward law that would require an AI vendor to provide an explanation with every decision it makes. So that if you wanted to, you can see how it got to that decision. And that’s going to be particularly important in AI-based systems that have direct impact on people. So let’s say a finance system is deciding, can I get credit or not? That kind of thing.

Melinda Wittstock:

Oh, I mean, that is a huge issue because if all the models have been trained on our previous history, for instance, you have all these biases and take the internet as a whole. I remember a long time ago, Microsoft in our world, a long time ago, a couple of years ago, released a bot on Twitter, it took three days for it to become a Nazi.

Laura Bell Main:

I do remember, it was both very funny in some ways and also deeply alarming.

Melinda Wittstock:

Deeply alarming because that was a couple of years ago and you think of the innovation since then. So the LLM is only as good as what it’s been trained on and how it’s been trained and the bias of the humans who’ve trained it and then the decisions that it starts to make. And without that transparency, that’s really tricky. And on the other hand, you have all the developers of this thing, oh my God, how could we possibly document each? Especially when it jumps the shark and it starts making its own decisions.

Laura Bell Main:

Yeah, it’s going to be a very exciting and challenging time. And for me, it’s really looking at those systems that are really people focused. So anything with immigration, anything with transportation, finance. And I think we’re going to learn a lot in the next few years, I think the savvy listeners, if you’re looking for something to really follow, I would follow very closely how we balance protecting people and data with this very aggressive and rapid technology change.

Melinda Wittstock:

Right. Because it’s so rapid, it’s almost like you don’t even know what the unintended consequences are going to be. You just even look at the evolution of social media. I think when social media started, everyone thought it was a beautiful thing, it’s going to connect us all, it’s going to be so good for humanity and whoops.

Laura Bell Main:

Yeah. I mean, I’m old enough to remember the very start of Facebook and it was neat. I was keeping in touch with people and it was just happiness in my pocket. And now I have a supercomputer in my pocket that makes me cry. So yeah, social media is definitely its own thing and now with AI on top of that, my goodness, now it is a very rapid cesspool. So who knows what’s going to happen next.

Melinda Wittstock:

Well, it’s interesting too because to what degree is the possibility of malware being inserted either intentionally or into one of these models or AI systems?

Laura Bell Main:

I think the reality is it is probably not malware we’ll be focusing on. So there’s a lot of issues at the moment in software in general, AI systems included, but it’s a bit broader than that, where a lot of the software we build now is built from other components, libraries and frameworks that are third party, they’re provided by other people. And if you are able to influence those and put something malicious into a library that lots and lots and lots of people build their software around, then we have big cascading issues through our whole ecosystem. And that’s what we call supply chain security and that’s a big issue around the world. On the AI side, we’re also seeing a whole novel bunch of attacks, which we call a prompt injection, which actually you can have a play around with this if you like, even if you’re not a security nerd yourself.

So it’s the idea that because prompts are freeform, you can ask them literally anything you like, off you go, be as creative as you like. But if you structure a prompt in a certain way, you can start to encourage the AI to share information it wasn’t intended to share with you. It’s like when you are asking questions to family members and they’re being evasive, so you talk around it and ask things that would imply the answer. It’s the same thing. So I think we’ve got this whole emerging category of new attacks that we’re only just starting to see the results of. We’ve seen examples of people being able to get passwords out of LLMs, sensitive information. And there was a particularly interesting example this week of being able to guess, it’s sort of guessing but it’s sort of not, the full details of somebody’s, their personal information using LLMs. So you’re not really looking up that information, you’re just able to infer it from all of the data points that the models can see.

Melinda Wittstock:

Oh gosh. I mean, that’s scary. The implications of that for democracy or a bad actor using that, to influence an election or to-

Laura Bell Main:

Absolutely.

Melinda Wittstock:

… go after, I don’t know, political enemies or in a business context, business competitors or, gosh, oh, that’s terrifying.

Laura Bell Main:

And it starts younger and younger. I have an 11-year-old and she’s the daughter of two security people, so she is what you’d think she would be. But because these tools are so widespread, you can use them in any context. And we are really under prepared for understanding the impact and the risks in every demographic that’s using, whether it’s a large issue like a democracy or a smaller one like your kids’ education results or how they’re operating in a school environment.

Melinda Wittstock:

So where we stand right now, we had President Biden in the United States issue this long executive order about all the things and agencies and all this sort of stuff. We have the work in Europe, we have a whole bunch of stuff around military, countries trying to get together and figure out new rules around war and security and such regarding AI. We have all these things, but how on Earth can legislate… Sorry. How on earth can governments, government agencies, legislatures, whatever, even possibly keep up with this just because the lack of knowledge and the pace with which it is changing? How do you see that and the efforts just to try and figure out what’s the right way to regulate this stuff?

Laura Bell Main:

It’s so hard and it’s such a divisive area as well. Because out in industry, if a big corporation, I’m not going to name one, but one of the big tech companies that you can think of, if they decide they want to go and do X, Y, and Z, they don’t really actually have to follow any rules. They can just go and build the thing and see what happens. And we see that all the time. A lot of these new early stage LLMs came from that kind of space. But the governments, they’ve got a catch-22. They would benefit from using some of these technologies, but they’re always so very late in the party. Now, if a government wants to use AI technologies, they’re heavily regulated. There’s lots of rules that saying you should do this, that and the other, and those rules will grow in complexity in the next two years.

But industry isn’t held that way. What we’re seeing time and time again in both privacy and in security is where government is holding itself to a very high standard around the world and then they try and enforce it out in the wider world and it’s just not working. The number of companies that are actually held to account for technical blunders or security issues or breaches is astonishingly low. And so at the moment, sometimes, maybe it’s a bit cynical for this time in the morning for me, but I kind of think, are we wasting our effort in trying to retroactively come to these organizations? How can we shift how we do this to be closer to the start of them? But that’s a big problem and I don’t think I’m the person to solve that.

Melinda Wittstock:

It’s a very difficult issue too because you have an unlevel playing field where you do have the Googles and the Facebooks and the Elon Musk’s and whatnot of the world who have so much money and here in the United States, spend so much money on lobbying and the cat’s already out of the bag. And then you’re right, how can they even be evaluated because the damage is already done and the influence is already bought? So-

Laura Bell Main:

Yeah, it’s very challenging indeed.

Melinda Wittstock:

It is. And so talk to me a little bit about your clients and the issues that they have. What are their biggest worries and some of the main things that you’re solving for them right now?

Laura Bell Main:

Sure. So we’re on a bit of a mission. So there are 30 million people in the world right now who build software. So software developers, testers. It’s a huge community and I don’t think most people are aware of how many people are in that space now. But what’s happening is we built all of this incredible software from finance to health to transportation, you name it, there is technology being built to make the world better in some way or just make it a bit more fun. And we try and do the security for these things afterwards, after the software has been built.

So at SafeStack, we work with development teams in about 89 countries now startlingly, ranging from just two people and a big dream up to airlines and banks. And we’re trying to get everybody involved in software development to do just one hour of security every two weeks, which in software we’d call a sprint. And so if you look at that as a net effect, if you have 30 million people and each of them do just one hour of security through what they’re doing from thinking about the design a bit more coherently or conducting a threat assessment or any of these skills we can teach them, then instead of us being responsive at the end, we can actually grow an entire generation of software that is secure by design.

Melinda Wittstock:

Right. So it’s a more proactive approach getting in on the ground floor. And so yeah, it’s not trying to fix things, it’s avoiding things right from the get-go.

Laura Bell Main:

Absolutely. It’s really hard to fix security once you’ve built a system and it’s out in the world. It’s so incredibly difficult and very expensive. And our developer teams, to an extent, we undervalue, we underestimate what they’re capable of. They already manage all of these aspects of software. They make it scalable, usable, accessible, performant. They do all of this every day just as part of being a good engineer. So what we’re trying to do is take some of those skills and just tweak them a little bit so that by the time our security teams need to get involved, then that foundation is really well covered and our specialists, the people who we want looking for those gnarly edge cases or those really sophisticated attacks can really focus on the hard problems.

Melinda Wittstock:

That makes a lot of sense. So how hard is it to persuade development teams, especially at startups who have a lot of this move fast, break things, Mark Zuckerberg attitude, where you’re under a lot of pressure to get a product to market quickly, to iterate fast, to do all these things, and that’s so much a part of the culture, to actually take the time on something that doesn’t feel, “Sexy.” Do you know what I mean?

Laura Bell Main:

Oh, I totally get it. And security is not a sexy problem at all.

Melinda Wittstock:

Yeah.

Laura Bell Main:

And it’s tricky and it would be the same in any area where we are massively time poor at the moment. We’re all trying to get so much done every day, every week and adding more complexity to that, there’s going to be friction, there’s going to be pushback. And historically we’ve done this very badly. We did the equivalent of walking up to a dev team and saying, “Hey, your baby is ugly and you should feel bad, please do more things.” And we did that for 15 years as an industry and it did not work. Unsurprisingly. What we do, the team of SafeStack, we’re all hybrids, we’re all either former or current software engineers as well as being security people. And so when we’re coming in and talking with a team, it’s not about, “Hey, look, this line is wrong. You need to do this correction.”

It’s a cultural change. It’s about understanding the constraints and the decisions that are being made and how security can be put in there and what the impacts would be. Now there’s a few challenges and there always will be in technical communities, we have a lot of folks who go, “Well, I’ve been doing this 10 years, I know everything. You have nothing left to teach me.” And that’s cool. In those cases, we can encourage those people to learn so that they can coach, so that they can bring other people around them.

For some people, they need to learn by having visibility of what’s going on in their system. So learning the theoretical level isn’t going to help them, they need to see, in my code right now, what are the security challenges? What are the issues? And we can help them do that. And the nice thing about how we do this and what we’ve been doing is it doesn’t really matter your seniority or your role or your skillset, there is a little bit that each of us can do. And by making it respect the time that they have to spend in their life, which is not much, and making it very practical, we’re able to go past that, “Hey, we’re going to make your life more difficult.” And then it seems practical, it seems easy. “Oh, just do this small thing and here we’ll make it easy for you.”

Melinda Wittstock:

Right. And so how does it actually work in your company then? Do you embed people with the team or how does that work?

Laura Bell Main:

No, we’re a product. So once upon a time, a long time ago, we were a services organization, so I was like Mary Poppins going into chaotic startups and helping them figure out doing software security. But now we’re a product, so you can sign up, you can bring your entire team on board and we have a whole range of things that work together to make this really practical and flexible for your team. So we have courses and qualifications as you would expect, but the modules are small between two and five minutes long. Because everyone’s time poor and you need to do this at a pace that suits you.

We have labs for exploring and playing with things, everything from a vulnerable crypto exchange that we built, right the way through to CICD pipelines that you can play around with. We have seminars once a month, so you can come along and deep dive with some other people from other organizations and go, “Okay, cool. I’m solving this problem. How have you done it?” And then we have a community where we’ve got a mechanism that you can ask anonymous questions. So if you are building system three and it’s going to do this shiny thing, it’s not really safe for you to go on the internet and go, “Hey, I’ve got this vulnerability and I’m stuck.” So we created a mechanism to get help from other people all around the world who are solving the same challenges.

Melinda Wittstock:

Oh, that’s amazing. You mentioned you’re working in 89 countries, amazing. Congratulations on that. And you’ve got large companies, small companies or whatever. I mean, how many software developers are now involved in this?

Laura Bell Main:

18,000.

Melinda Wittstock:

18,000. That’s amazing.

Laura Bell Main:

Yeah, it’s not bad. We’re two and a half years in, so baby steps, there’s still a long way to go to 30 million, but we’re very proud of our progress.

Melinda Wittstock:

So there are a lot of implications here because obviously you mentioned before there’s a huge cost of getting it wrong because to go back and fix these things is almost impossible, very difficult. But to get it right from the beginning, ultimately you’re going to save your company a lot of money. People are in the here and now though and so what are some of the impacts that you’ve actually seen in working with these now 18,000 developers and the companies you have? Is there a demonstrable result that that company can be proud of? How does that-

Laura Bell Main:

I think security is a really funny one because we very frequently get asked how people measure and see the change. And sometimes it’s big stuff. So we will get emails from companies and say, “Hey, thanks to this.” A team that we didn’t even know were building some software, so our security person will reach out to us and talk to us and they’ll say, “Hey, we didn’t even know this team was building a thing, but they came to us with a threat model and we didn’t even know they knew how to do threat models. So now we’re engaged with them.”

And for me, that’s an absolute joyous moment because a software team that has independently started thinking about these things and known when to go and ask for extra help, that’s a huge step forward. We’ve even had folks who, the software testing community are often left out of training and things like that, but we have intentional content for them. And we’ve had very confused software leaders get in touch and say, “Hey, I thought you should know. My testing team have just called me in and presented for 20 minutes about security and how we’re doing it wrong. And apparently they took your courses and now they’ve got a work program in place to do security testing and we thought you should know.”

Melinda Wittstock:

Oh, wow.

Laura Bell Main:

So it’s really good. It’s inspiring.

Melinda Wittstock:

And so is this mostly in the QA area of it? I mean, I suppose that’s an obvious case, but also just for the actual developers themselves, is it-

Laura Bell Main:

No, it’s whole software development life cycle. So we have stuff for product owners and product managers, for when they’re making their decisions and prioritization. We have stuff for UX designers and how their superpowers can be used for good or evil. So to create good behaviors in the application or unsafe circumstances, right the way through to the deployment and then ongoing maintenance of the system. So all the way through that software life, we’ve got things that folks can then bring into their workflows in very manageable ways.

Melinda Wittstock:

So another question about AI that occurs to me is increasingly people are using AI to write code and also to QA the code. So that’s another layer. And so what if the AI that you’re using is not great?

Laura Bell Main:

Yeah, I think at some point we’ll all just give up and go and farm alpacas somewhere. But for now, I think we’re at a playful and creative stage, we’re at peak excitement. I think the best thing we can do is be curious and playful. So as these things emerge, understand them, don’t avoid them. Go get stuck in and see what they can do for you. But do it in a way that’s conscious and can be slightly critical. So look for the weirdness, look for assumptions, look for questions that you don’t know the answers to because those are going to be the bits that guide what we need to do next.

Melinda Wittstock:

Yeah, there’s a real need for outside the box thinking and understanding the game board in a way that you’re playing on. And often we can get pretty blinkered and you’re just doing this one thing or these five things or whatever, but not really seeing necessarily or connecting the dots in a lot of ways. So I like that you set a cultural shift because it does really require that. And so in your business, what are some of the biggest challenges? I mean, when you’re persuading companies to work with you, how does that sale go? Do you meet with resistance or is it mostly like, “Oh my God, we need you so badly. When can you start?” I suppose it runs the gamut.

Laura Bell Main:

It does run the gamut. I would say what we find the most humbling and I think the biggest realization we’ve had is you can see a giant brand name, a company that has been around a long time, they’re everywhere, huge companies. But in reality, the problems that they face in getting development teams to do security are exactly the same as startups. They just have additional organizational complexity. So we see that the biggest barriers to embracing this are not the will of the development teams. They often actually really want to do a good job, they want to build high quality software and security is part of that. But often what they don’t have is a structure. For many years, the security team has been very separate from development and it is in the majority of organizations still. And culturally, those two areas don’t talk very well.

So our dev teams lack what we would call in application security program, an easy to manage, easy to follow pathway of how we do this without getting in the way. So what we’re actually doing in the company right now is we are working to go, “Right. We have this incredible education platform that we know can change behaviors and we can know it can be useful to all these teams, but the biggest challenge is that the teams know that they need it, but they don’t then know how to fit it into their world.” So we’re trying to then look at how do we support that? How do we support an AppSec program in an organization that doesn’t have one? And challenging the assumption that all of the big name companies already have one because they really don’t.

Melinda Wittstock:

Yeah, 100%. And it’s something that you don’t often think about or just even, say if you’re a startup or emerging growth company. I mean, is this in your budget? That kind of thing. Chances are, probably not.

Laura Bell Main:

Yeah. We intentionally built the company knowing that, so we do a few things that actually help younger companies in particular come and get started. So firstly, we have parity pricing. So wherever you are in the world, your dollar value is recognized equitably. Because if you’re in the US, that’s one thing, but currencies being the way they are right now, it can be really hard if you’re elsewhere in the world. So we’ve got parity pricing in place. We also have this free program. So if you are listening, you can go check this out, we’re super low touch on sales, we’re not going to harass you, I promise. But you can get essentials training for up to 50 people free of charge. And that takes away that barrier of going, “I can’t get started, I don’t have budget.” That is no longer an excuse. You can come and train 50 people free of charge, no strings, no credit cards, no tricks.

And that means that they can come and engage. And what we find is a company might come and take the free plan for six months and then because they’ve gained confidence, we can have a chat and go, “Hey, cool, what does this look like next? How do we go further?” And sometimes they’re ready and sometimes we’re not. And we have quite an empathetic approach to sales and to getting people onto our platform and into our environment because they have to be ready, there’s no point forcing somebody into a space that they’re not quite at the stage they can make the most of.

Melinda Wittstock:

That’s amazing that you do it that way. And so Laura, well we have still a little bit of time. I’m interested in your entrepreneurial journey. Because I mean, you went from being obviously a software developer yourself and an expert in all of this to launching a company, co-founding it and building it. What’s that journey been like? I mean, what was the impetus, first of all, to start the company and what have been some of the surprising things on your journey along the way?

Laura Bell Main:

I would love to have one of those stories, Melinda, where you go to the right business school and you had a business plan and you strategically thought about it and calmly did it, but that’s really not what happened.

Melinda Wittstock:

It rarely is actually.

Laura Bell Main:

Yeah. I sometimes wish for it, but I don’t think it happens. So in 2014, I just had my first child and I was the AppSec leader for a financial technology company that is now owned by a very major credit card player. And it was a great job, every element of it was fine, I guess it paid well, it had all of the good responsibilities, but it wasn’t my thing. And I was getting really frustrated at the same time because AppSec at that point was very slow. We were there to slow down development and make sure risk was taken care of. And I don’t know whether it was just becoming a new mom or whether I was just naturally very stubborn and opinionated, but I was like, “I’m sure we can do this differently.”

And so in a moment of less than good planning, I quit my job. I had about $300 in the bank account and I decided I would start a little tiny company and I wanted to prove that you could do security in a software lifecycle without getting in the way. And so off I went, I literally wheeled a little chair down the main street of Auckland in New Zealand to the dodgiest, cheapest shared office I could find and started making phone calls. And that led to a couple of books. So Agile Application Security published by O’Reilly and speaking around the world.

And then eventually, having been a consultant and a trainer in that space up until 2020, myself and my co-founder Erica, we decided that we could never scale a consultancy the way that you could scale a software company. And wouldn’t it be amazing if we could do what we did for companies in person, we could do that for them wherever they were in the world, at a price point that everyone could afford and that scaled. And so we did. We took the COVID advantage of that lockdown period where we were staying at home and had a bit more time and we built a product company and here we are now.

Melinda Wittstock:

Fantastic. Well, congratulations, Laura, on all your success. And I could talk to you for a lot longer. I’m going to want to have you come back on and talk about this again because it’s just changing so quickly. And my company, Podopolo, which is an AI powered podcast platform, our brand studio division is actually launching a podcast about AI. So when we do that, you’ll have to come on.

Laura Bell Main:

Absolutely. I’d love that.

Melinda Wittstock:

Be one of our experts there. And of course, Podopolo the company, we’ll check you guys out too because this is something that’s really top of mind, I think, for all companies, including ours, because we’re busy making software.

Laura Bell Main:

Amazing.

Melinda Wittstock:

So thank you so very much. I just want to make sure that everybody knows how to find you and work with you. What’s the best way?

Laura Bell Main:

Awesome. So if you want to connect with me and I love connecting to new folks all around the world, please find me on LinkedIn and send me a connection request there. For the free training, I highly encourage folks to go to safestack.io and just click get started and you can go into the free plan. It has no time limits, no credit cards, so you can just get started today. And finally, there’s one little challenge for you. So if you think your company should be doing a little bit of application security, but you want to make it super manageable, we run a free program, it’s called One Hour AppSec, and we give you all the materials, which are the videos and little guides and templates for your team to do one hour of application security every sprint. And it’s free of charge. So if you go to onehourappsec.com, you can sign up and you can join in and just do that little tiny amount of security in whatever you’re building.

Melinda Wittstock:

That’s incredibly generous. So we’ll make sure that we mention that in all the show notes. If you’re driving or jogging or whatever you’re doing as you’re listening to this right now, you can find this in the show notes for the episode, wherever you get your podcasts and on Podopolo as well. And also you have a podcast too, Laura, so tell me a little bit more about that as well. You can learn a lot more on your podcast, which is called Build Amazing Things (securely).

Laura Bell Main:

Absolutely. So I’m a nerd who loves technology. I was a sci-fi kid. I was always watching and reading things about this amazing future and we’re building most of it now, which is cool. So I interview a lot of technologists who are building tech all around the world, in all sorts of places, from roboticists to people who are changing things in agriculture and everything in between. And we nerd out about how cool the technology is, what the challenges are and where security fits into that. So please do subscribe and if you can ever think of a guest who might want to appear, please reach out because we’re always keen to talk to new people building amazing things.

Melinda Wittstock:

Oh, yeah. Well, have me on.

Laura Bell Main:

Absolutely.

Melinda Wittstock:

[inaudible 00:32:43] on and on about AI and blockchain and podcasting.

Laura Bell Main:

Fabulous.

Melinda Wittstock:

Okay. Fantastic. Well, thank you so much for putting on your wings and flying with us today.

Laura Bell Main:

Absolute pleasure, Melinda. Thank you so much for having me.

 

Subscribe to Wings!
 
Listen to learn the secrets, strategies, practical tips and epiphanies of women entrepreneurs who’ve “been there, built that” so you too can manifest the confidence, capital and connections to soar to success!
Instantly get Melinda’s Wings Success Formula
Review on iTunes and win the chance for a VIP Day with Melinda
Subscribe to Wings!
 
Listen to learn the secrets, strategies, practical tips and epiphanies of women entrepreneurs who’ve “been there, built that” so you too can manifest the confidence, capital and connections to soar to success!
Instantly get Melinda’s Wings Success Formula
Review on iTunes and win the chance for a VIP Day with Melinda
Subscribe to 10X Together!
Listen to learn from top entrepreneur couples how they juggle the business of love … with the love of business.
Instantly get Melinda’s Mindset Mojo Money Manifesto
Review on iTunes and win the chance for a VIP Day with Melinda
Subscribe to Wings!
 
Listen to learn the secrets, strategies, practical tips and epiphanies of women entrepreneurs who’ve “been there, built that” so you too can manifest the confidence, capital and connections to soar to success!
Instantly get Melinda’s Wings Success Formula
Review on iTunes and win the chance for a VIP Day with Melinda