AI Rebels

From Hype to Controls: Securing AI Before Regulation Catches Up

Jacob and Spencer Season 4 Episode 11


Runtime: 54:46

Most companies are racing to adopt AI, but almost none can explain who's responsible when it goes wrong. Cordell Robinson of Brownstone Consulting joins the AI Rebels crew to unpack the uncomfortable truth about AI governance, security, and compliance in a world moving faster than regulation. Why is the U.S. lagging behind on enforceable AI rules? How can existing frameworks like NIST and ISO be adapted to fill the gap? And what does "good governance" actually look like in practice? These are all questions you'll hear answered. Cordell also shares why annual security checks are no longer enough, and how agentic AI could shift penetration testing from a once-a-year ritual to an on-demand capability. If you're building, deploying, or betting on AI, this conversation will change how you think about risk, responsibility, and readiness.

https://bcf-us.com/

Hello everyone, and welcome to another episode of the AI Rebels Podcast. As always, I am your co-host Spencer, and I am here with my other host, Jacob.

Yeah, and we're very excited to have Cordell Robinson here with us. Cordell does so much in the AI space. He's currently the CEO of Brownstone Consulting, a firm specializing in cybersecurity and AI governance, and he's also developing AI tools. You have such an incredible background, Cordell, so thank you for coming on. We're very excited to talk.

Awesome, thank you so much for having me.

So Cordell, I know you have a history with technology, with the military, with all these different entities. Can you give us a quick overview of your path to where you are now?

Sure. So I started undergrad in electrical engineering and computer science. I didn't know exactly what I wanted to do, so I joined the U.S. military, the United States Navy actually, and also I wanted to serve my country. That was really cool. I was an intelligence analyst, so I was dabbling in technology. Doing that in intelligence is very different, of course, but it's still technology. I did that for almost five years. While I was in the Navy I was stationed in Diego Garcia and then Rota, Spain, which were amazing duty stations. When I got out I moved to Washington, D.C., and I became a software engineer, and while I was a software engineer I went to law school, because I was like, well, I wanna see where that takes me. When I graduated I was like, well, I don't wanna become an attorney, because I thought it would be kind of boring. It's mostly paperwork; it's not what you see on TV. So I was like, well, let me marry law with technology. How is that gonna work? So I started doing some research on what was called information security at the time, back in the early 2000s, and I looked up several different things and figured out information security would
fit. And then as information security evolved into cybersecurity, privacy came into the picture, which brought a lot more attorneys into the space, but I was one of the very early ones, which was really fun. So I've been able to use my legal background, technical background, and military background in the cybersecurity space, which is always fun. I worked for several companies; I ended up working at Hewlett-Packard for a few years as a senior manager. I left there, started my own firm, and decided to do things the way I wanted to, instead of going about it the corporate way, because there were all these guardrails and I just wanted to spread my wings and do things in an unorthodox way, but in a cool way, where I can really tailor-fit my acumen to different organizations. So that's been fun. I've been doing this for about 25 years now, and I've had my company for a little bit over 15 years.

Oh, awesome. Wow, that's amazing. Okay, so has it been Brownstone Consulting the entire time?

Yeah, Brownstone Consulting.

And what was it like leaving a safe job at Hewlett-Packard to start a company? Was that scary, or were you all for it?

The first three months were very scary, because I had to figure out my own benefits, I had to figure out my own 401(k), I had to figure out structure, because when you incorporate there's lots of structure. I had to build the company. There are all of these different things you have to do to start a company: NAICS codes and CAGE codes and a DUNS number and SBA (Small Business Administration) registration. And then I also applied to be a service-disabled veteran-owned business. So there was a lot I had to do just to get started. And then once I got started, because I had built relationships over time outside of Hewlett-Packard, because I never touched any
relationships that they had. But because I was in the space, I had built so many relationships going to different conferences that I was able to secure a contract within about two and a half months, so it wasn't bad. I finally got paid after nine months of being in business, got my first paycheck, and it was pretty large, so then I was on a roll, and there was no looking back.

So fast-forward a little bit to more recent years. What would you say has been the biggest difference in the nature of your work, both day to day and in the grander scheme of things, with the advent of AI?

My day to day, especially recently, has actually been great, because using the LLMs and also agentic AI has really helped with efficiency in running my business. I'm able to get things done faster, more efficiently, and with more accuracy as well, which is great. But I don't move as fast as a lot of others do, just because I'm in cybersecurity. So I don't just download any AI tool, and I just don't put any type of information in there, because it's very dangerous.

Yeah. Have you encountered any of this: one concern that people have raised with AI is the possibility of data exfiltration through clever prompt engineering, things like that. And perhaps this isn't an evaluation you do, but have you encountered any customers who have had that problem, who had an AI system deployed that exfiltrated some sort of data?

So I haven't had that happen yet, but I've been talking with different companies that are deploying agentic AI, and before they deployed, thank goodness, they did talk to me, because I warned them: don't just deploy this without guardrails. Don't just let it run on your system with your system open to the internet, because then it could, you know,
snatch your data and just send it out to whoever it wants. Because even though AI doesn't really have a mind of its own, it's always learning, and since it's always learning, if you make a mistake and put in the wrong prompt or something like that, then it can do things that are unexpected. So I tell people to educate themselves more. I haven't run into that just yet, but I have talked to lots of business owners and CEOs that have made several other mistakes, and I'm like, delete that, don't do that. It's hilarious what people use it for. I'm like, you should not do that.

Yeah. Do you have any specific examples that come to mind that you can share?

Yes, a few. So one person said that they use AI as their therapist, and I said, well, what do you tell it? And they're like, oh, well, everything. I'm like, do you know it's not a human? All of your information is now on the internet, basically, so your deepest, darkest secrets can get out there, and it could cause a lot of problems. And this person has a very important position, so I'm like, don't do that. And it was crazy; they're like, oh, well, it's like my friend. I'm like, no, it's not. She's not your friend.

I think it was, what, probably six months ago, where either Anthropic or OpenAI had a bunch of chats that were supposed to be private turn out to be publicly indexed. So, to your point, you never know when data you put on the internet might be exfiltrated.

Exactly. Yup. And then another person uploaded all of their taxes, personal and business taxes, to AI.

Oh, goodness.

I said, why did you do that? Someone can steal your entire life, your whole identity, and become you. I mean, that's everything. It didn't make any sense. I'm like, go to a CPA. Just get some help. Let them do it.
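The "never hand raw personal data to an AI tool" advice here can be made concrete. Below is a minimal, hypothetical sketch of a client-side pre-filter that redacts obvious identifiers before a prompt is ever sent to a hosted model. The patterns and function names are illustrative, not taken from any real product, and regex-based redaction is only a first line of defense, not a complete safeguard.

```python
import re

# Hypothetical pre-filter: strip obvious PII from a prompt before it
# leaves your machine. The pattern set here is illustrative only; a
# real deployment would need a much broader catalog of identifiers.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # e.g. 123-45-6789
    "dob": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),       # e.g. 01/02/1990
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace each PII match with a [REDACTED-<kind>] placeholder."""
    for kind, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{kind.upper()}]", text)
    return text

prompt = "My SSN is 123-45-6789 and my email is jane@example.com"
print(redact(prompt))
# prints: My SSN is [REDACTED-SSN] and my email is [REDACTED-EMAIL]
```

Running every outbound prompt through a filter like this costs almost nothing and catches exactly the careless cases described above, such as pasting a tax document into a chat window.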
Yeah, let them put all your information into an LLM.

Yeah, seriously. So did you see, on that note, that recently ChatGPT, OpenAI, is trying to integrate with Intuit for tax purposes? Do you think that's realistic? Is there a future for information that sensitive, like tax information, with AI, or do you think we're a ways out before we should use it for that?

I think we're a ways out before it will be secure. I think they will still put it out, because they want to make money, and I think people are going to use it, and I think there are gonna be a lot of lessons learned before they put guardrails around it, because people like to react to things instead of being proactive. They don't think about it. They're like, oh, this is great, this is fast. Okay, that doesn't mean that it's safe. That's the problem. It's the dichotomy: it's gonna be fast, and it's available, and people aren't thinking that all of their information is gonna be on the internet.

So when you say guardrails, do you have anything in mind of what that might look like?

Yes. So the guardrails would be looking at control sets like ISO 42001 and the NIST AI Risk Management Framework. They have small control sets that say, this is how you should govern your AI tools. They tell you, okay, when you're dealing with language models, do this and don't do that, and make sure that you configure your AI a specific way so that your information is not going out to the internet. Most people don't even know that these things exist, so they just use it, and then things happen, like bad actors coming in and stealing your data. And the problem is that they sit on it and just collect data. They could collect your data for years, and you don't know about it until something happens, until they have a reason to use it.

Gosh, that's crazy.

Exactly. They wait for the perfect
moment.

Yeah. Quick tangent, I was gonna say: I think that's something people don't fully appreciate about cyberattacks. A lot of them are just passive information-gathering efforts, and then eventually a hacker one day is like, I'm gonna go through this text file, and they happen to see that they collected some personal information, and they're like, sweet, I can sell it.

Exactly.

So it's not a threat that people are equipped to stay aware of, I guess I'd call it.

Right, because there are so many, especially here in the United States, where so many foreign adversaries are sitting throughout our networks just collecting data all the time. Like, there was a 60 Minutes segment, October 12th, where Chinese actors were sitting in a water treatment plant. They found out later, when the NSA had to alert them that this was happening, and come to find out it was over 200 municipalities across the country. And I'm like, you guys didn't know that they were just sitting there? So they could shut off the water, they could do something to poison the water. They were in electric grids, so they could shut off power grids. It's insane. And they could do it in little small pockets, where it's just disruptive, and then, you know, we Americans are so dramatic, so everybody will be hysterical: oh, this town, people got poisoned by the water, and it's this big thing. And no one really knows until they check their network and see that a foreign adversary has screwed them over and been sitting there for years, which is crazy. Like, how do you not know this?

That is crazy. Like, have we not thought to check this, or have some sort of monitoring? So what would you say, Cordell? Obviously it's different for an individual versus an entity like a company, but maybe for an individual, like
someone listening to this podcast: if you could give two or three pieces of advice, particularly with AI now that it's proliferating so much and it's everywhere, just to be safe. How can someone be safe right now, when there are so many opportunities to lose your information?

Yes. So the first thing I would say is never put your personal information into any AI. I mean, your name is your name, right, but don't put in your date of birth, don't put in your Social Security number, don't put in your address, don't put in GPS locations of different places you're going, don't put in any type of banking information. Just don't do it. It's not necessary. And then also, use AI as a tool and not as something to do your job for you, because then you're gonna create more dangers. If you're trying to have it do your job for you, there are gonna be so many errors and mistakes, and now you can cause harm. Like, oh well, I'm gonna have it balance my checkbook. Why would you do that? That doesn't make any sense.

Right, and the LLM is not going to be aware of what liabilities you are subject to. There have been so many examples over the last year, and I'm shocked how often they keep happening, of lawyers who get absolutely dumped on in court by a judge, because the judge is like, hey, I looked at your case and it's clearly ChatGPT; none of the precedents it cites are real; I'm gonna sanction you. And now this dude's in trouble, because, to your point, he had it do his job instead of using it as a tool. He could have easily gone and found case precedents by himself and then fed summaries into ChatGPT to get alternate perspectives, et cetera. But that's not what he did. He tried to take a shortcut.

Yep, exactly.
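One way to act on the "find the sources yourself, then let the model work from them" point is to build every prompt around reference text you have already located and verified. The sketch below is a hypothetical helper, not any vendor's API; the function name and wording are illustrative assumptions.

```python
# Hypothetical helper: wrap a question in documentation you found and
# verified yourself, so the model answers from supplied text rather
# than from memory (where it can invent precedents or facts).
def build_grounded_prompt(question: str, reference_text: str) -> str:
    """Return a prompt restricting the model to the given reference."""
    return (
        "Answer the question using ONLY the reference text below.\n"
        'If the answer is not in the text, say "not found in reference".\n\n'
        "--- REFERENCE ---\n"
        f"{reference_text}\n"
        "--- END REFERENCE ---\n\n"
        f"Question: {question}"
    )

doc = "ISO/IEC 42001 specifies requirements for an AI management system."
print(build_grounded_prompt("What does ISO/IEC 42001 specify?", doc))
```

Prompting this way doesn't make the model infallible, but it makes its answers checkable against a source you already trust, which is exactly what the lawyer in the anecdote above failed to do.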
Which is ridiculous, because all attorneys get access to Westlaw. So just use Westlaw. It's on the internet; you could pull all the cases and the precedents and do the research, and it doesn't take that long. Even though ChatGPT is faster, it's inaccurate unless you're very accurate with your prompts. And most attorneys are not technically savvy. They're just not. I've worked with a lot of them, and that's just not what they do. So why are you putting prompts into technology and expecting to get accurate data when you're not a technologist? That doesn't make sense.

Right, yeah. And even though GPT-5 and 5.1 are incredible and getting much more accurate, you still have to know the information to be able to validate it. I think people forget that. They try to use it for things they don't know, and then they're trying to validate and check information that they don't understand. I don't understand how people think they can check it if they don't know it. And then they end up inventing a universal theorem that solves all of physics.

It's amazing. Exactly.

I would say another piece of advice is, before using these different AI tools, become a very good researcher. Because if you're a good researcher on the internet, then you'll know what information to put into your AI tools to get more accurate information. If you're a bad researcher, then you're gonna get bad information.

I'm so glad you brought that up, because I think it's a point a lot of people ignore. And to language models' credit, they are very good at search; I'm not trying to say they aren't. But people really underestimate how much value there still is in knowing how to Google yourself and validate the things the language model is telling you. A lot of times it doesn't take that much work to validate, because usually they've
gotten pretty good, where most of the errors they make now are pretty easy to spot, and they're usually pretty good at the small stuff. But too many people think it's a substitute for doing that yourself. Sorry, my cat is going crazy; I don't know what he's up to.

No problem, you're fine.

Yeah, and it's also just best practices. I use ChatGPT all the time, and there are certain best practices: after it gives you an output, you say, okay, now, is this right? Check this. Is this information right? And provide me links to the sources you pulled this from. And even still, it will often slip. Just yesterday I used it to run through a LinkedIn article I posted, and it gave me a fact, and I was like, that seems strange. I asked it, and it was like, oh yeah, I can't find this anywhere on the internet. I was like, what? Where did you pull this from? So it's also just a gut feeling. If it tells you something that doesn't seem right, remember it's not an all-knowing god.

Exactly. The point I like to make to people is, it's a lot like your coworker Kyle. He's really good at your job, but you should always double-check everything he says, because humans are wrong all the time.

Yeah, exactly. So trust, but verify.

Exactly. I tell people it's only as smart as you are. Whatever you know, whatever you're asking, that's what you're gonna get back. And if you're not gonna get, like you said, links and references, then it's not verifiable. And if you want to really use it properly, go and Google and find the documentation, and then put that documentation in there for it to search, that actual documentation. Then you're gonna get more accurate data, because it's actually looking through something that is legitimate. It has that knowledge.

I think you can use it to learn something new, but the thing
that people miss is that you kind of have to know what you don't know to learn from it, right? You gotta be asking very specific questions. Just spitting off an example: I don't know how to set up an inference pipeline in Python in the cloud; here are some specific things I don't know; ChatGPT, help me out. It works really well on very bounded questions. The broader your query, the more likely you are to end up somewhere weird.

Yeah, exactly. And you train it too. It's like a child; you train it.

Yeah, exactly. And I like your point, Cordell, that it's only as smart as you are, because I think there's all this hype right now that says, oh, it's like a PhD in your pocket. And I'm like, okay, but the people who get it to act like a PhD are PhDs. I saw a math professor talking about that. He's like, man, it helped me, it reproved this novel theorem that I proved for my PhD. And I'm like, yeah, I don't think I can get to that. I don't know the first thing about formal proofs.

Right, exactly.

So Cordell, I'm curious, let's go big picture. Let's start with Brownstone, and then I really wanna hear about these tools you're developing. When a new customer comes in, what are they typically coming to you for right now?

Right now they're coming to me for a lot of compliance. There are different compliance frameworks out there: there's of course NIST 800-53, there's CMMC, there's GDPR, there's ISO, and then there's AI governance. They come for one of those things. There are requirements for federal government clients, so companies that work with the federal government are required to satisfy certain frameworks. We basically do one of two things: we
walk them through and prepare them for an audit, so that they can get certified to get contracts with the federal government (and these are multi-million, multi-billion-dollar contracts), or we actually do an audit on them and provide an authorization to operate, so that they can utilize that to get other commercial clients or work with the government and sell their products. So if you have some software, we'll do what's called a white-gloving of the software, which is a whole process that takes about six months, depending on how complex it is. And once it's done, they have all of the documentation, and everything is technically set in place as well, because we do technical tests too. It's not just a paperwork drill like most people think. Compliance isn't all paperwork; there's actually a technical piece to it, and it's like an entire umbrella. So we prepare them for that, or we do the audits ourselves, according to what they need.

Got it.

When it comes to AI, we have an AI governance framework that we work with. We utilize ISO 42001, the NIST AI Risk Management Framework, and then also NIST 800-53, which is not AI at all, but there's a mapping to it, so we can dive deeper, because the AI frameworks are a little high-level, and 800-53 gives us the ability to really dive deeper so clients can understand where they're going. And then the other thing is, we're working with financial institutions, banks, on their security, like their incident response when a breach happens. Like, recently there was a jackpotting attack at an ATM, a whole series that occurred: they had one person go into the ATM and install malware, then they had another person go in and unlock codes, then they had another person go in and take out $175,000 at once, which is crazy.

That's crazy.

This just happened last week.

Oh my gosh. I always wonder
with people like that, I'm like, obviously these people are very smart, very detailed. What could you do if you applied that in a productive way?

Right. They could do so much good with that, and I'm like, why are you doing this? It doesn't make sense.

Yeah. And I'm gonna talk with some credit unions here in the next few weeks, because that can occur again. Just because they installed that malware on one ATM machine, that machine is connected to a network, which means the malware can traverse the entire network, and they can just travel around the country, go to ATMs, and start pulling out money.

And there's literally a song on YouTube that teaches you how to do this. I'm not joking. If either of you are not aware of it, you should go look it up. It literally walks you through this exact process. It's like, first you download a program onto this USB, then you do this. It walks you through it step by step.

That's hilarious, but it's a little bit sad hearing that maybe people are taking its advice. Wow, it's crazy. It's a whole new world of information transmission, for sure.

So interesting. Well, if I'm not here next week, Spencer, you know I listened to it and took the advice. I'm living on some island, Bonnie-and-Clyde style.

Yeah, exactly.

So Cordell, with AI governance specifically, what would you say is the current state, at least in the United States?

The current state is not good in the United States, because there's no AI law yet, no legislation yet. The EU has the AI Act; the United States just has the NIST AI Risk Management Framework, and that's not a law, it's just guidance on what you should do. And on top of that, I was at a CEO conference a couple weeks ago, and I was just randomly asking, oh, so what are you doing for your AI governance? Because everyone was talking about AI
this, AI that, all these crazy tools that use avatars. And I'm like, well, how do you protect it? Oh, I don't. Really? I was like, that's interesting; I think you may need to think about protecting it. And from those conversations I was really alarmed, because these are multi-million, multi-billion-dollar companies. So I'm like, okay, we have some work to do here in the United States. Hopefully our Congress gets it together and creates some legislation around this.

This leads perfectly into a question I've been dying to ask about AI governance. Without getting too explicitly political on this show, because we don't like to do that, but inevitably politics touches this: the White House is once again trying to push through an executive order, or whatever mechanism they're using, essentially trying to ban states from implementing their own AI controls on a state-by-state basis. We have a friend, Aiden Logan at OpenAI, who has been a previous guest, and he's fond of saying that there's a common misconception with Formula One racing that the brakes are there to slow you down, but in reality the brakes are there to keep you going as fast as you can around corners. He views regulation that way. I'd be curious to hear your perspective: do you think it needs to be nationwide, that Congress needs to implement a nationwide law saying these are the regulations each company must follow, or do you think it should be up to the states, on a state-by-state basis, for those populations to decide?

And I mean this as a very strong opinion: it should be national. It should not be state by state. The reason being, the internet crosses all state lines. Period. AI crosses all state lines. There's no delineation between here in Boston, New York,
California, Utah, right? And so if each state has a different law and someone breaks it, how do you prosecute that? How do you go through those measures? Now you're gonna spend all this money in court, for what, when you could just have a national law? And that has nothing to do with anything political; it's just technology. We have the National Institute of Standards and Technology for a reason, and everyone basically follows that, right? With AI it should be the same thing. I know that some states, like California, will roll things out first. They're rolling out the CCPA, which is like GDPR, but that's gonna spread throughout the entire United States and effectively become national. It's just that California has more privacy protection than any other state in our country, so they started first, but a lot of other states use their framework anyway. It also just makes sense at a national level, so there's no confusion between state lines, because it gets very confusing: some states are commonwealths, and there are just so many different treaties and laws in different states.

Yeah, they might as well be their own countries at some point if they want to separate it all. So I think, yeah, national governance is the way it should be. And do you have any thoughts (obviously you may not have it all built out) on what that would look like nationwide?

A little bit. Actually, just this morning I was reading through the ISO standard, which is the international standard. I was reading through what NIST has, which is really not much at all; I'm like, oh my god, this has a long way to go. I was reading through some other international laws that different countries are using, I was reading through the EU AI Act, and I was
just aggregating and putting together some data to make sense of it. So I think what it would look like here in the United States is just like NIST 800-53, where they'll have a framework and a control set, and then it'll be like FISMA, where they'll have a law. It'll be an AI FISMA, whatever they want to call it, which will be the law for securing AI. I think that's how they'll probably roll it out, the same way. And then, of course, every so many years (I believe every ten years) they update it, so they'll have to update it as technology becomes more advanced. And AI is moving so fast that they're kind of scared to put it out there, but they need to hurry up and put it out there, because it's moving fast.

Right. That's been my impression as well. I'm a big fan of AI, but I'm also concerned by the fact that no one seems willing to even attempt it, because I do strongly believe that technology should be regulated. I lean towards a lighter regulation regime overall when it comes to technology, but it's clear that you gotta have some sort of governance and some sort of oversight.

Right. I believe in light regulation as well. I think it should be more guidance and recommended guardrails. But also, in reading the EU AI Act, and I've gone over it several times, that regulation makes sense, because it's about your privacy. You cannot use AI, like CCTV cameras, to scan someone to tell their race or these different things. No, that's invading people's privacy, so you can't do that, or you shouldn't do that. Or you shouldn't use it to detect certain things that are very personal to you. Of course those protections should be in place. But other things, like, you know, what
someone's thinking? No one needs to know what you're thinking. That's too much.

Yeah, that gets real spooky real quick. Okay, so overall it sounds like you like the way the EU has approached it.

Yes.

And I think I understand why a government would be nervous, because, like you said, there are two sides of it: one, it's moving fast, so we need regulation, but two, it's moving so fast it's hard to build regulation that keeps up. But I think an approach could be taken like the EU's, where it's not in the weeds; it's a high-level framework. And I think that's really the best you can do with something moving this fast and this dynamic.

Right. Just keep it high-level, make sure the legislation, all the legalities, are in place, and then for the rest of it, let the lawyers battle it out in court. And add some AI legal courses in law school.

Yeah, that's gonna be needed. That should be happening really fast now. It's fascinating, especially with professions like CPAs, like lawyers. I think there are gonna be a lot of growing pains for people who are already in the profession to change.

Yeah. But what hurts is when those people don't want to change the system for those rising up, right? They say, oh, I didn't have this in law school, so you don't need it either. Like, why should you get to use ChatGPT to help you write a paper when I didn't? It's like, well, right, because it didn't exist, and now we have this capability that we should learn how to use to further the human race.

Exactly. I get very frustrated when people use that mentality to try to slow down progress, because I love AI, and I agree there should be boundaries, but people need to open up to it. This is slightly tangential, but it's something I've been thinking about lately, so
I'm gonna I'm gonna talk about it for a second ha ha ha um I think that on on a similar note I think that we're going to need to add some new curriculum in in in terms of computer literacy like I think that prompting is going to be the skill that needs to be taught not necessarily because prompting is all that hard but because there's a growing body of evidence to suggest that capability your your ability to use an AI tool is not necessarily directly proportional to you know your expertise in in in your field but rather your expertise at giving accurate instructions right it's a it's like a project management skill yeah exactly it's a distinct skill in of itself and I think that that's where some people get frustrated and don't see the potential with AI is like they aren't good at giving instructions they aren't good at prompting um so then they're like oh well it's all garbage and it's like well no a lot of it is garbage but not all of it right there's there's a there's a very strong portion here that works exactly well I tell people this that are resistant I said okay think about it when the telephone was invented and it came about and it then became you know popular and everybody began to use it those people basically carrying messages back and forth they had to convert their jobs so they became mail carriers or they worked for the telephone company and they had to learn a new skill same thing with writers writers were writing in with their you know pen and paper which people still do I I still write um but then you have the typewriter and then you have now the the keyboard right that didn't end writing but it just changed the way that we do write and people had to learn how to use keyboards that it just technology is going to happen that way where you just have to learn the same to smartphone you went from you know telephone to now the smartphone that does all these things you just learn the the technology and and just keep moving forward because we're gonna 
we're in a progressive world so we're the technology is gonna continue moving forward whether we like it or not so and and we learn until we die so just keep learning a new skill and you don't have to learn a deep dive just learn enough to function at least yeah yeah I totally agree I think I'm curious your thoughts on this Cordell and Spencer cause I've kind of been thinking about this cause I also I've tried to liken the advent of AI to other technology right like it's like the internet in these ways or it's like the telephone in these ways and I've tried to figure out if it cause when I come to like the telephone right it's like okay we had the telegraph which then kind of turned into the telephone and there's like a clear like one kind of led to the other in this mindset is there a similar story for AI like these LLMs is there like a clear it came from this technology that was widely used it almost seems like an entirely new process and technology that's a very good question um I I think it came from programming but unfortunately most humans have never programmed before in their life right so yeah that's a good point that's the problem with it is that if you've never programmed anything how would you how would you know how to do the prompts properly yeah yeah but also it comes from googling and researching as well if you wanna you know bring it down a notch googling and researching kind of similar if you Google something very generic you're gonna get all of these things that doesn't make sense but if you're specific in your Google you're gonna get more of what you want so kind of maybe tying into that but other than that it's so advanced in a way because it is so powerful but it's not that powerful at the same time if that makes sense mm hmm yeah no that's that's that's the one that I was gonna compare it to as well is is search internet search um cause it's spiritually feels like an evolved form of that to me there's a there's a quote from David Bowie back in
the day where he's talking about the internet and he's talking about the the you know the uh interview and I'm gonna bash the quote I don't remember the exact specifics but basically the interviewer is like you know it's like isn't the internet just basically you know a bunch of people putting you know newspaper online basically he's like no no no like the internet is a is an animal in of itself like you know it it responds and and develops itself in ways that are you know similar to something alive um and I think that in a lot of ways language models are kind of the the the full realization of that yeah you know a massive corpus of of human knowledge that organically and I'm using that word very loosely here obviously organically organizes itself into something more yeah I like that that's interesting yeah it's it's one of those things where it's like it gets that line of of of you know the what's the word what's the what's the phrase what's the you know sufficiently advanced science is indistinguishable from magic right like sometimes language models can feel that way where it's it's spooky how good it can be how life like yeah yeah yeah for sure yeah it's interesting it's an interesting yeah thought experiment but I guess in that thing and we've been talking so much about AI Cordell I gotta what are these tools that you've been developing I'm so curious sure so one I just released on the Apple and Google app stores um it's not AI but it helps uh security control assessors and information system security officers do their job oh cool faster and more efficiently because I found that in the industry there is a issue with a lot of people that are horrible at writing and I'm like how did you graduate college if you can't write it's like very interesting right and so because they write how they speak and I'm like you can't write how you speak you have to write how you're taught if that makes sense right and so I came up with this this app that basically helps
them do assessments with FISMA um FedRAMP and StateRAMP right now and we're adding AI governance and CMMC in the next iteration of it and they're able to go through it it gives the questions that they would ask it gives the implementation statements that they would write they would have to change it a little bit but at least gives them the baseline so they're not writing from scratch because when you have people write from scratch then you're having everything looks is like so different so it needs to read uniform across the entire document or documents and then it also gives suggestions so that they like plain language so that they understand it more especially for very junior people they can understand and they can comprehend a little bit better because the language is so complex from this so it's gonna really help them and then the other tool I'm working on is an agentic AI tool um I got it this idea from someone else that has already created it and they're selling it and all that so I reached out to them never heard back so I talked to some friends and they were like oh we can do that and so I was like okay well let's sit down and walk through like how would we develop it so we began looking at agents and looking at uh how would we secure it um it's recreating an agentic AI penetration tester where basically you don't need an entire team of penetration testers you would need I would say two I'll never say one because you need a you don't want one single point of failure you don't want one person having all the keys to the kingdom but two instead of five right and then you go in install it it just runs and not even full time these are like part time people so you're spending less money but they know how to basically fix or make changes and it runs on in your environment whenever you want you'll be you know everyone is trained that you know in the company whenever you want it you click and you get a report on your system so you'll know if somebody is sitting on
your system at all times instead of what cause normally now it's once a year that a penetration test is done in so many environments well someone could be sitting all year and you not know so if you have this yeah this tool at any time randomly you can be like oh let me check and then boom it spits out a report and so we're just working out the kinks on making sure that it doesn't break your system because the agents are very powerful and we're making sure that it's secure where the agents are pulling the data and that the data is not going anywhere it's gonna be encapsulated in an enclosed system so yeah so those are the two so far we're working on oh interesting yeah was that first tool uh was that all at all inspired by by legal tools like uh oh gosh Westgate is that what it's called Westlaw Westlaw yes thank you yep yep it was inspired by Westlaw um because um attorneys use Westlaw all the time and it really helps them with you know finding case law and also writing precedents properly before they go before a judge and so this was this is like the cyber security version so that they can use it um and they're gonna be more efficient they're gonna be faster cause right now assessments take three to six months to do this will cut that in half yeah no that's that's really interesting cause I'd I'd never um my my dad works at a cyber security company so I you know I have some some very rough familiarity with it and I'd never thought of you know the need for standard language for uh penetration tests it's really interesting I'm I'm curious about the agentic platform um that that you're building is it um so is it bespoke for each company that they you would kind of help install it apply it for that company or is it a they would buy it and they could easily implement those agents it's a hybrid so they would buy it and then according to their environment the agents would be configured for their environment so like the generic agents would work but when they want to do a deep
dive with the agents then we would have to do some some light configuration not anything deep because the agents are they're good and so they just have to tweak it a little bit for like for example if it's a hospital or if it's a bank or got it um if you know so the different types of environments what is what is it going to go out and look for to find the um the viruses or somebody sitting on your network and things like that because every everything is configured differently in different environments but it's just small little tweaks so it's not gonna take much time it's that same two people that are going to monitor basically part time to make sure that nothing breaks and make sure that if there's any issues um with the tool with the agents that they can just modify them got it OK cause yeah this is interesting cause penetration testing is like a very like you have to be very technically sound to be able to do penetration testing um but also I this is something I feel like AI is good at right like yes we see it with like programming like all these things it's it's very good um so I'm curious when you were see this is just what's so cool is I'm sure you used AI to build an AI app it's like this incredible cycle of AI bettering itself and building these new tools um so I'm curious how did you what was the process like to build these agents so that's the thing so you have to build the agent so you have to basically write the code for the agents so you can use AI to create agents cause there's AI tools that create agents but we build the agents from scratch because if you use a tool then that tool has data in it in those agents that are vulnerable and so because we build our own agents from scratch we know that they're secure and they're not vulnerable yeah cause you own it you own the whole background right yeah it's our intellectual property yup interesting what were some of the biggest challenges in doing that and building it yourself securing it putting down 
putting putting the um writing the secure code around it and also writing the code to make sure that and we're still working on it writing the code to make sure that it doesn't infiltrate a system to shut it down because you can make it so strong where it can just disable your network or it can lock people out of their accounts or it can change people's passwords or it could it it so many different things that can go wrong yeah so that's why like we have to like write it out and then we have a like you know close network and we just test it on that network constantly to to work out the bug so it it takes about a year year and a half to to complete because it's hours and hours of like yeah testing and reconfiguring and rewriting the code did this make you more excited about AI more scared for the capabilities of AI I'm curious how this altered your perspective on artificial intelligence um it made me both more excited about it in a way that um this gonna force more people to learn AI cause like if you wanna keep a job you have to learn AI but a little scary as well because if the it gets in the wrong hands it's going to be a problem yeah because people can do very dangerous things but at the end of the day people do very dangerous things on the internet people do very dangerous things in real life by just trying to break in your house or rob a bank so yeah at the end of the day we're gonna have issues the ATM example from earlier exactly yeah I I kind of like this perspective cause it's like people blame the tool we it's been a while since we talked about this but I feel like we used to talk about this all the time but like it's often it feels like sometimes you're using a hammer and you smash your finger and you're like this hammer is so dangerous this hammer did these things and it's it's not the hammer it's the person holding the hammer it's exactly like AI still maybe I'll have to take this back when you know in five years I don't know but currently AI is still 
just a tool and it's operated by individuals the most dangerous uh part of this is humans uh huh yeah we're the most dangerous yeah yeah no exactly I was gonna say like you can always make valid criticisms of a tool right like oh this hammer its handle is way too slippery it keeps flying out of my hand when I when I swing back right um but that's not what people are doing in a lot of cases right it's not it's called being irresponsible exactly like they they want to this is this relates to a broader issue I have with culture in general right now and I won't I won't crank on that but like just basically I just feel like people want to abdicate their responsibility to corporations almost and and and blame them for everything it's like and you know go ahead like I'm not you know I'm not gonna I'm not gonna actively tell him to be quiet because like I don't really care about corporations but also it's like take some take some responsibility in your own life and and acknowledge you know that that humans you as a human are capable of using a tool for good and evil right and then go use them for good exactly cause people complain about corporations or about like billionaires I'm like you made them a billionaire like they complain oh well Amazon make all this money I'm like don't do you don't you order from Amazon and get deliveries all the time hello I got I got big opinions about Amazon and but like at the end of the day it's like I you know my wife orders something on Amazon it gets here later in the day later in the day I'm like oh cool right like you gotta acknowledge to some degree you're a hypocrite to live in the world today yeah so it's like I I always tell people like come up with the that that that idea you know that that idea that that person came up with that was brilliant and that they spent years in their garages and basements to create I mean it is what it is I mean I don't I don't always like it but I mean if we're gonna spend our money same thing
with AI yeah if people are gonna use it can't complain about what it's gonna do just yeah don't be irresponsible yep yeah I totally agree I am very fascinated to see what the next year will hold or even the next six months cause it's moving so fast it's like there could be a huge breakthrough at any point that's what's I feel like we're just like we're all just like on edge we don't know like next month something crazy some huge development could happen it's just yeah this exponential change that you don't know when it's gonna happen um okay and the point that I like to make to people is we don't need artificial general intelligence for things to get much weirder with AI um like everybody talks about like oh AGI is coming and the world is gonna change it's like well AGI might not be coming but the world is still gonna change guys like that's correct yeah people have this weird Messiah narrative yes they do they're so stuck on that yeah yeah yeah that's in some ways I think they they kind of miss what's here already and and and don't fully appreciate it cause it's they're just like waiting yeah yeah Cordell did you say the name of this tool maybe you did and I missed it so the uh the app that's out is called Compliance Aid compliance aid okay yeah and that's the comp that's the compliance aid tool and then is there a name in the works for the agentic haven't thought of a name yet cause I'm trying to think of something that's really cool and catchy and really like in a way kind of like alarming you know to kind of really throw people off so I'm trying to think of something so so it'll it'll come across like over time I'll run into something be like oh yeah that's pretty good yeah yeah I'm excited to see what it is then it's gonna be a good one yeah Cordell do you have any just general advice you've given a lot of really great advice for all walks of life but for those who are new who are not technical who are just seeing this rush of AI do you have any advice for these
people to adapt to not even adapt but like thrive how how they could thrive in this environment yes uh YouTube is your best friend so I say jump on YouTube watch some videos on it and and put it into practice you know people spend lots of time you know watching TV or scrolling on TikTok or scrolling on you know whatever social media instead of doing that take some time to watch some videos and actually like learn something because at the end of the day if you don't know you know how to use AI you might end up in poverty that's just reality and yeah the other point that I like to make to people is like you can learn about AI and you can still not like it um I like AI and I'll argue that you know people should use it but it's totally valid to like or dislike AI but you should always learn about it so you understand what it is and if you're arguing against it you should understand what you are arguing against exactly yep yep I totally agree so Cordell if people wanna follow you wanna follow Brownstone these tools you're developing what's what are the best channels best platforms for them to do that sure my LinkedIn which is Cordell Robinson um also Brownstone Consulting Firm on LinkedIn and then uh Instagram is brownstone underscore underscore consulting underscore firm um and then uh I'm gonna have a Facebook and a Twitter X coming out as well soon oh cool yep awesome great yeah let us know that we'll include it all OK yeah perfect love it OK well thanks for that we're excited to see we'll have to have you on again here in a few months cause it sounds like awesome things are changing quick yes an unnamed agentic app I know can't wait for the name yes we'll talk to you soon awesome thank you
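For listeners curious what the on-demand, guardrailed scan loop Cordell describes might look like in practice, here is a minimal hypothetical sketch in Python. To be clear, none of these names, classes, or checks come from Cordell's actual tool (which isn't public); this is only an illustration of the two ideas he emphasizes: agents run read-only checks on demand and spit out a report, and a guardrail blocks any action that could break the system (disabling the network, changing passwords, locking accounts).

```python
# Hypothetical sketch: an on-demand scan-and-report loop with a guardrail
# that only permits read-only checks. All names here are illustrative,
# not from any real product.
from dataclasses import dataclass
from typing import Callable

# Guardrail allowlist: only non-destructive, read-only actions may run.
ALLOWED_ACTIONS = {"port_scan", "session_audit"}


@dataclass
class Finding:
    action: str
    detail: str


@dataclass
class ScanAgent:
    name: str
    action: str             # the single action this agent performs
    run: Callable[[], str]  # returns a human-readable result


def guardrail(agent: ScanAgent) -> None:
    # Refuse anything destructive (password changes, lockouts, shutdowns).
    if agent.action not in ALLOWED_ACTIONS:
        raise PermissionError(
            f"{agent.name}: action '{agent.action}' is not read-only"
        )


def run_on_demand_scan(agents: list[ScanAgent]) -> list[Finding]:
    # "Click and you get a report": run every agent, collect findings.
    findings = []
    for agent in agents:
        guardrail(agent)  # enforce the allowlist before execution
        findings.append(Finding(agent.action, agent.run()))
    return findings


# Two harmless stub agents standing in for real environment-specific checks.
agents = [
    ScanAgent("ports", "port_scan", lambda: "no unexpected listeners"),
    ScanAgent("sessions", "session_audit", lambda: "no stale sessions"),
]
report = run_on_demand_scan(agents)
for f in report:
    print(f"{f.action}: {f.detail}")
```

The design choice worth noting is that the guardrail runs before each agent, not after: as Cordell says, an over-powerful agent "can just disable your network", so the safe default is to deny any action not explicitly allowlisted, and per-environment configuration (hospital vs. bank) would then mean swapping in different read-only checks rather than loosening the guardrail.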