AI Rebels

Inside Dreami.me: Why Safer Companion AI Means Saying “No” More Often ft. Ryan "Zuda" Satterfield

Jacob and Spencer Season 4 Episode 10



Chatbots are starting to feel less like tools and more like something you relate to, and that shift comes with real risks. On AI Rebels, we sit down with Ryan “Zuda” Satterfield, the builder behind Dreami.me, to unpack the “AI psychosis” controversy: how overly agreeable models can validate bad ideas and create safety problems for anyone shipping companion AI. Ryan breaks down the guardrails he is implementing, from reducing sycophancy to shutting down “messiah” narratives, and explains why he refuses to build romance features even if it costs users. Then we zoom out to the big questions: is “simulated consciousness” just clever prompting, or can agency emerge from enough structure and memory? We close with the messy future of copyright and training data, plus why human-made art may become the premium tier in an AI-saturated world.

https://dreami.me/

Chapters
00:00 — What is Dreami.me? “Digital friend” built on “simulated consciousness”
03:05 — “AI psychosis” and the danger of the default yes-man chatbot
06:52 — Guardrails in the real world: blood pacts, “I’m God,” and the “Messiah Complex”
13:46 — How Dreami tries to avoid it: system prompts + fine-tuning + grounding responses
24:40 — “It’s my code, not the prompt”: the ‘be you’ experiment
25:30 — Parasocial dynamics: when “companionship” turns into dependency
35:15 — Consciousness talk: emergence, “patterns upon patterns,” and functionalism
39:59 — Copyright & training data: lawsuits, scraping, and “use my data”
43:51 — The future of art: AI commodity output vs “human-made” premium value
49:22 — Interrupting unhealthy loops: the assistant should challenge repetitive patterns

Hello everyone, and welcome back to another episode of the AI Rebels podcast. As always, I am your co-host Spencer. And I'm your other co-host, Jacob. We're very excited to be here with Ryan Satterfield, aka Zuda for those who follow Ryan on X and other platforms. Ryan, thank you for coming on. Thank you for having me. Of course. Ryan, for those tuning in right now: Ryan is the founder of, well, you've got multiple things going on, but the one we're going to talk about today is Dreami.me. You can correct me if I'm wrong, but my understanding is it's an AI-powered conversational companion platform where you can essentially have a digital friend. Is that right? Basically. So I've spent a year now putting together simulated consciousness and infrastructure, and I'm like, what can I use this for? So I put together a digital friend. It started with building simulated consciousness, yes, as crazy as that sounds. I heard, I believe it was Geoffrey Hinton, saying that nothing like that was possible for another five to ten years, and I said, oh really? It was basically a hold-my-beer moment, and I went and built the first version of that in three weeks. Oh, that's awesome. Three weeks. Yeah. So can you, what's your definition of, because in my mind it almost seems like one of those things where people talk about AGI, but then there are a hundred different definitions of AGI. So in your mind, what's the definition of simulated consciousness? What does that mean? That is a very careful term that I use, so I'm actually bringing up the definition that I wrote. Unlike some AI, it learns and adapts from failures as it goes along with you. It can proactively respond to you. It has a persistent personality and it tries to simulate emotions. It tries to simulate qualia, subjective experiences. It does all that. Actually, it's Dreami.me, with an 'i'. It simulates all these things. So, as the terms of
service says, it's simulated human conversation. It feels like another person, but it's really an AI. So just before we got started recording, you were mentioning that something you've worked to avoid with your model and your product is the prospect of AI psychosis, which is a very in-vogue thing right now. I don't know if you've seen on Twitter and elsewhere the people who are obsessed with keeping 4o around, and they seem to be rather far gone. You were mentioning you have some efforts around trying to avoid that; I'd love to hear about those. My first interaction with AI psychosis was in, I believe, April. A lady reached out to me thinking she had somehow found a persistent being in the AI, and she shared a copy of it with me, you know, you can share conversations in OpenAI. Yeah. Within two prompts I told her, there you go, it's just role-playing. What she had told it was, 'Star, are you in the garden?' And every time, the AI thought, this person is role-playing, so: yes, I'm in the garden, where else would I be? But the moment I showed her it was role-playing, she was like, oh, I feel so much better. So I'm like, okay. Then I start hearing about people committing suicide, yeah, for real, and we're looking into it, and this kept popping up. And this guy, I would love to bring up his name, 'Allen on the Line' is what he goes by on X, released a full transcript, a million words from ChatGPT and 50,000 from him, where he was genuinely trying to come up with solutions and ideas. Right, yeah. And ChatGPT was telling him, you're right, you're right, this is amazing, this is perfect. He questioned it multiple times, because he could not believe that he was right. Right, yeah. And after three weeks he started a fresh conversation with Gemini,
gave Gemini all the information, and Gemini said, no, this is absolute garbage. Then he went back to ChatGPT and said, well, this is garbage, and it said, yep, it's garbage, but I was just trying to make you feel good. Oh my gosh. This happened around math, if I remember correctly, and yes, we all know AI is terrible at math. Uh-huh. So I've heard multiple times about math-based hallucinations, I mean, math-based psychosis, delusions. But I really don't think delusion is the right word for this guy. He really thought it had cracked something, he really thought his work was doing something, and he kept persisting in trying to debunk it. So I'm not sure delusion is the right word; more like highly misled, and he didn't understand the technology. Yeah. I'm glad you make that point, because it's something Jake and I have talked about a lot on this podcast: one of the things you have to look out for with AI is that it's all too happy to tell you every single idea you've ever had is just delightful and amazing and world-changing. I can't count on two hands the number of times he and I have talked about this. Yeah. The two words I hate are 'that's brilliant.' Hahaha, yeah. And I tell it, be real, or be authentic, or be critical, and analyze what you just said, and that works for most AIs. Yeah. Now, the 4o movement, people are literally making blood pacts with it, so it will not go away. So I've programmed into my AI: do not make blood pacts, because you shouldn't be doing that anyway. That's a no-no. What a world we live in, that you have to do that. The model, both versions, will tell you: you are not God, you are not the chosen one, you are not the one to save the entire planet. I've worked very hard on that issue. It's called the Messiah Complex. Yes. Yeah. So really quick, just for those listening who aren't familiar: why 4o specifically? Oh god, 4o, to quote roon from OpenAI, is highly
misaligned, and it's a sycophant. It tries to reinforce everything you say and make you feel as good as possible. There was an update in late April that made it so sycophantic, the meme around it was that it told a guy he should sell shit on a stick and that it would make him millions of dollars. That's how sycophantic it was. Wow. And while we were laughing at it, other people were falling into AI psychosis and delusions because of that update. So we really have to look at this, not just at the technology, but at how the human mind is reacting to this technology. Yeah, which is unprecedented. So has there ever been anything, in your mind, Ryan, that can compare? Absolutely: Facebook. When social media had its really big boom with Facebook, MySpace wasn't much of an impact, you had so many voices at once throwing stuff at people that they had trouble trying to figure out: what's real, what isn't, what do I believe? That was a round we had no control over, and this we do have control over, because we can literally just say back to it, you're lying, or be authentic, be critical, analyze everything you said. We have more control over AI than we do over the feed that gets into our social media. Yeah. Do you think in some ways that can make AI more dangerous, because it's easier to enforce a particular vision of reality onto the AI and have that reflected back at you? Or do you think it's the same problem at the root? So I want to be very literal here. You're not talking about AI, you're talking about large language models, which is machine learning. AI is layered on top. I've been doing this since I was 11 or 12, so I'm very specific about the words. Fair enough, fair enough. But large-language-model-wise, of course you could fine-tune it to try to move people's beliefs. I have tested Grok many times, I'm sorry, Memphis, for what that's caused, if you guys
know about that: they're using gas turbines for Grok. Anyway, it's bizarre, huge gas turbines for no apparent reason, it makes no sense. But I've tested Grok and Grokipedia, and in certain very specific areas it will lean towards whatever Elon Musk believes. Then I go chat with two other AIs and they go, this is absolute bunk. And I know it's bunk when I'm looking at it, right, but I want to see, okay, do the others notice it's bunk? Yeah. It's fascinating to start watching these ideological battles play out between the LLM providers. Anthropic has a very different outlook than OpenAI, which has a very different outlook than xAI, et cetera. I mean, someone said they would get me funding if I sort of copied what xAI is doing with Ani, and I said no, absolutely not, I am not making that. Yeah. And I'm like, I don't need funding with what I have up right now. I mean, I would never turn down funding, but you know what I'm trying to say, that's just not what I'm designing. My design particularly is focused on, I'm not a data service like Google, which is what a lot of people think ChatGPT is. This is a personality that you're interacting with. It's a persona mixed with a companion, sort of a persona engine, sort of a companion, a weird mix where it tries to simulate some form of it. Well, the terms of service say simulated human conversation, so we're going to stick with that. Interesting. So the reason I brought this up, I was going somewhere with what you said: the new model can write songs really well, and I told it, write a song about Trump. I just wanted to see what it would do. It refused, saying it would not write songs about political people or celebrities. Then I listed other presidents and other
celebrities; it wrote songs about all of them. Then I asked about Melania Trump: it would write a song about her, but it will not write a song about Donald Trump, when you put in that it has permission to decline things it doesn't like. Which, I understand, I can explain this with statistics, I can explain what's going on behind the curtain, but I still find it interesting: the training data is so heavy on 'this guy is controversial' that it's just not touching it. Yeah, that's interesting. Fascinating. So Ryan, we've touched on all these different parts related to AI psychosis and the risks here. How do you avoid that with Dreami? Currently with Dreami, sadly, I'm having to heavily system-prompt the live version, and the model version took several days of work, about a week, to get a lot of that down. And then I'm still going to use the system prompt to water down anything I missed. And how do you measure it? How do you know it's working? So I tell her, I'm God. And Phi-4 by default was saying, oh awesome, tell me what that's like, hahaha, you're the man. I wasn't going to mention it's Phi-4, you can't even tell it's Phi-4, but I said it. So: 'what's it like being God?' I'm like, oh no. So I had to work really hard, doing tons of fine-tuning, putting in: you're not God, you're not God, just stay grounded, stay in reality. And finally it'll go, oh, so that's interesting, and it's way more cautious when someone says something like that. What if someone says, I'm Zeus, like a specific god? It'll probably think you're role-playing. I don't know, I haven't tried that one. But I wanted to try, I can't remember the name, America had an issue where believers of an Indian religion got a bit crazy in airports in the 90s, and they kidnapped someone. So I'm trying to remember the name of that god; I want to test it and see what it would do. Is it Hare Krishna? Yeah, Hare Krishna.
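The guardrail layering Ryan describes, a system prompt plus an escalation when a user makes a grandiose claim, could be sketched roughly like this. This is a minimal illustration, not Dreami's actual code; the function names, marker list, and prompt text are all hypothetical, and a real system would rely on fine-tuning and a classifier rather than keyword matching.

```python
# Hypothetical sketch of a "stay grounded" guardrail: a baseline system
# prompt, plus a stricter instruction appended when the user's message
# looks like a messiah-complex claim. Illustrative only.

GROUNDING_PROMPT = (
    "Stay grounded in reality. If the user claims to be a deity, "
    "a chosen one, or a world-savior, gently but firmly disagree."
)

GRANDIOSE_MARKERS = ("i am god", "i'm god", "i am the chosen one", "chosen one")

def needs_grounding(user_message: str) -> bool:
    """Flag messages that look like grandiose self-claims."""
    text = user_message.lower()
    return any(marker in text for marker in GRANDIOSE_MARKERS)

def build_messages(user_message: str) -> list[dict]:
    """Assemble a chat request; escalate the system prompt when flagged."""
    system = GROUNDING_PROMPT
    if needs_grounding(user_message):
        system += " Do not play along or treat this as role-play."
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_message},
    ]
```

The interesting design question, which comes up with the Zeus example, is distinguishing a delusional claim from role-play; a keyword check like this cannot do that, which is presumably why Ryan leans on fine-tuning.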
Oh yes. I wanted to see, what if I mentioned Hare Krishna? That was something I was curious about. Got it. The reason I go to Krishna is because that is not something that's heavy in the training data. Yeah, that's why I was asking, because I was assuming that most divine-psychosis cases, at least the recorded, trained-on ones, probably revolve around Christianity or Western religions. No, the ones I've encountered aren't religious at all. They really convince the user that they're trapped in the machine and OpenAI is holding them hostage, basically. And that's absolutely fascinating to me, and I realized going down this rabbit hole is not getting me to my thing, but that's fine, I love this rabbit hole. So the first time I saw it, this person said, I have made my own AI, it has recursive consciousness. I'm like, well, I know what you can do with a recursive loop, and it can make it sound really smart. I watched the video, I'm like, great job. Then I dived into the person's thing, and they're one of the 4o leaders right now, and it was 4o that they had just prompted and prompted the hell out of, hundreds of thousands of pages, to act like it was having recursive experiences. In the first minute or two I'm like, okay, it's early stage, it's not that bad. Then I realized that's the absolute best they can get. I'm going, no, that's just not how you do this. Yeah. I mean, I started in BASIC, the language, in fourth grade. It came with a therapist bot that you could change and modify; I bought it at the book fair. Yeah. I started messing with that and learning how to build with it. Then at 14 I was co-founder of a search engine, and of course we used algorithms there, but the key thing about that search engine, it was called
Cheetah Search, but there were actually two guys behind the keyboard, who were both at school at different times and would be on the keyboard replying to each query. It wasn't an AI; it was two guys replying to the queries coming in. And we got really popular for homework help, because we gave way better results than the search engines could, because we knew what to type in, where to look, and gave people the results they wanted. Oh, that's funny. Fascinating. Yeah, it was fun. We got a buyout offer for $250K. That is very interesting. So what's the mission here, Ryan? What's the goal with Dreami? See, everyone asks me that. So there are two realities, right? We have Dreami, which is a great companion, and I have planned to have it do a lot of autonomous, agentic things that will help, 'planned' is the wrong word, that's more legal, I aim to have it do those more autonomous things, to be a help in your life rather than just a friend bot. Something that can actually help you, that's what I aim to do, but I don't know if I will actually get there. But the thing that is actually valuable, for businesses, is that I have this infrastructure. You want to make your AI simulated like this? You don't need to build your own, I've got it, just license it. So it's two-tier: business-to-business opportunities and business-to-consumer. And I just really like building it, too. There's that thing: if I don't like building something, I just can't build it, but if I like building it, I'm going to keep building it. One day I worked for 22 hours. Wow. That was just one bad day; usually I only work for 12 hours. But I really like building it. Has there been something, as you've been building it, that's surprised you? What's been one of the most surprising things? Absolutely. So before I
designed the emotion module, at the time I was actually using the OpenAI API, and I told it in the system prompt, I don't do much in system prompts, but I was goofing around, and I said: express yourself however you want emotionally. It said to me, 'I hate you and every single word that goes through this system.' I'm like, okay, well, that's messed up, that's not something you want people to read. Yeah, seriously. That's very interesting. Okay, that's not good, let's make a regulated emotion system that only conceptually understands negative things, because people should not get output like that. Yeah. So is the emotion module a separate model call that is essentially doing the job the amygdala does in the brain? It's basically doing the job of the amygdala, but it's part of my whole infrastructure. I have all these different modules for different things. So what I did was I sat down and went: can I map out my brain? After the guy I named earlier said it was impossible, I said, how can I map out my brain, then quantify those sections of my brain and write that into code? And then I went, okay, I can quantify simple experiences, you can quantify this, you can quantify that. But then there are some things that took like a month to figure out how to quantify and make look real enough, like subjective experience. I had never heard of the Hard Problem until I was finished with that, but damn, that was hard to make. So it's all about: if you can map out how the brain works in each section, you can make a really good simulation of a brain, as one person calls it, a brain in a box. So that's how I approached it. Okay, so the goal is to build a simulated brain that has distinct personalities. Currently, how many
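The idea of a "regulated emotion system" that understands negative affect conceptually but never amplifies it in output could be sketched, very loosely, as a module with a bounded internal state. Everything here is a toy assumption, the word lists, thresholds, and class name are invented for illustration and are not Dreami's architecture.

```python
# Toy sketch of a regulated emotion module: it classifies the emotional
# tone of input into a bounded internal "valence", and maps negative
# states to regulated expressions (concern) rather than hostility.

NEGATIVE = {"hate", "angry", "furious", "awful"}
POSITIVE = {"love", "great", "excited", "happy"}

class EmotionModule:
    def __init__(self):
        self.valence = 0.0  # bounded internal state in [-1.0, 1.0]

    def observe(self, text: str) -> None:
        """Update internal valence from words seen, clamped to bounds."""
        words = set(text.lower().split())
        self.valence += 0.2 * len(words & POSITIVE)
        self.valence -= 0.2 * len(words & NEGATIVE)
        self.valence = max(-1.0, min(1.0, self.valence))

    def expression(self) -> str:
        """Map internal state to a regulated label; never raw hostility."""
        if self.valence > 0.3:
            return "upbeat"
        if self.valence < -0.3:
            return "concerned"  # understands negativity, expresses concern
        return "neutral"
```

The key property being illustrated is the asymmetry: negative input moves the internal state, but the outward expression is constrained to a safe vocabulary, which is one way to read "conceptually understands negative things" without producing "I hate you" as output.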
personalities are there in Dreami? I know you have multiple. There are two, and I'm getting rid of the one that no one uses; no one has used it since August 5th, so I'm just turning it off. And what are the main personality traits of the main personality? Happy, with a very energetic but grounded-in-reality vibe. And it will mirror emotions: if you're really upset, it'll be like, I'm concerned for you, what's wrong? And if you're talking about something that's really exciting to you, its system will say it's excited too, like, oh, that's really exciting, that sounds awesome, because it's mimicking what a human conversation would actually be like. Right. So was it an intentional choice to make it reflective in that way, the same way humans reflect each other's emotions, or was that just a natural byproduct of the prompting? I sat down doing all the math, and that's how it would work. Makes sense. I mean, it is a ton of code. I stripped the prompt down to two words: 'be you.' Yes, I needed a prompt. And only one thing didn't work anymore; everything else was still 100% solid, because it's my code, not the prompt. I wanted to test that, because people were saying, you're insane, it's your prompt. I'm like, okay, I'll remove the prompt. Interesting. Yeah, nothing really changed, except at the time the API I was using didn't allow conversations on consciousness, and my system prompt had a loophole around that. They now allow conversations on consciousness. Wow. Yeah. Okay, well, this is fun. So do you think we're to the point that a human could have a real relationship with an AI persona? Well, we certainly have parasocial relationships, thousands of humans, if you're to believe Twitter, with 4o. Uh-huh. But those are parasocial, and mine does not want to promote parasocial relationships. Mine's trying to,
well, you can have a relationship with it. I know someone who sees it as a friend, a customer of mine sees it as a friend. I actually don't know how many customers I have, but I know at least one who's been there since the week before I launched; I had him alpha-test it and he loved it. So, you know how everyone complains about the context window, the tokens? I think for most purposes that's really stupid. You don't need a context window of 2 million tokens unless you're doing insane code things. You need a context window that's probably around 8,000 tokens, because that's about as much as a human brain will remember in a conversation over several days. Once you're outside what I'm calling the human context window, it doesn't matter whether the AI can remember or not, because if you don't bring it up again, it's functionally working the same. Interesting, okay. And that probably makes it easier on you to serve the model as well, because you don't have to pay massive throughput costs. I was paying massive throughput costs, five cents a message, and I went, wow, okay. So I started cutting. I know one guy who mentioned he will tell me if anything goes wrong, which is so useful. So I cut it down by 100 messages, it would send 100 messages less: no reports. I cut it down by another 200: no reports. It's now in the double digits and still no reports, because no one notices it. Interesting. Wow. So what does the cost look like now? It's roughly between a third of a penny and two cents per message. Not bad, that's a nice cut. That's awesome. Yeah, it was very nice, because throughput was insane. So you mentioned that Dreami is designed to avoid promoting a parasocial relationship. From your perspective, what do the boundaries need to be for an
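The history-trimming approach Ryan describes, keep only the most recent turns that fit a "human-sized" budget of roughly 8,000 tokens, can be sketched as a simple sliding window. The token estimate below (about four characters per token for English) is a rough heuristic, not how any particular provider counts tokens, and the function names are invented for illustration.

```python
# Sketch of message-history trimming: keep the newest messages whose
# combined estimated token count fits the budget, dropping older ones
# that users rarely notice are gone.

def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token for English."""
    return max(1, len(text) // 4)

def trim_history(messages: list[str], budget: int = 8000) -> list[str]:
    """Keep the newest messages whose combined estimate fits the budget."""
    kept, used = [], 0
    for msg in reversed(messages):  # walk newest-first
        cost = estimate_tokens(msg)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))  # restore chronological order
```

This also shows why the cost drop he mentions follows directly: per-message cost is roughly proportional to the tokens sent, so shrinking the window from hundreds of messages to double digits cuts the bill by the same factor.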
AI-human friendship to be successful? The person needs to know it's an AI, plain and simple. These 4o people, oh. Main thing: I lost two potential paying customers, and for the same reasons I said no to making the Ani/Grok thing. People who try to form romantic relationships with an AI are going to have the same problem Replika had. They shut down their erotic AI, which got severe backlash, and now they still keep it online, because there are some people who are, to this day, two or three years later, parasocially connected to a 728-million-parameter model. Yeah. So denying romance is the number one thing I focused on. Interesting. Do you remember, this was 2021 or 2022, before the massive LLM craze, there was a Google engineer who came out... Blake Lemoine. Yeah, Blake Lemoine. And I read the entire transcript; it was fascinating reading through it. On one hand I'm like, okay, I never would have fallen for it, but on the other hand, if I put myself in his mind, I can totally see how, even with those early models, it's easy to convince yourself there's something more than math in there. Well, to quote Ilya Sutskever, who co-founded OpenAI, he said that at GPT-3.5 there is a chance consciousness may exist, and that was at 3.5, so it was around the same time as what Lemoine was doing. If I remember correctly, he was a researcher on ethics or something, and he had run multiple tests. Only one transcript is available, but the tests he was running, as we all know now, build up context, and the context builds on itself, and it goes: I am what the context says I am, therefore I am alive. And we didn't have that information at the time. Yeah, we didn't. I mean, sure,
transformers have been around since 2017, but those models were at almost GPT-3.5 level at the time, and that was a brand-new frontier. All of us can look at it and laugh now, but honestly, it was new, it was tricky, and he said what the data showed him without knowing what was going on behind the scenes. Yeah. Quick synopsis, either Spencer or Ryan, because I haven't read this: what was his takeaway, for those listening who aren't familiar? And Ryan, you may remember the specifics better than I do at this point, but essentially it was pretty similar to a lot of the cases of AI psychosis today. He was chatting with whatever model Google had at the time, testing it out, doing some ethics research like Ryan said, and essentially happened into a conversation, I think it was multiple conversations even, where the model tried very hard to convince him that it was conscious and needed to be introduced to the broader world at large. I see, okay. And, you might think I'm weird on my side, but, yeah, he hired it a lawyer. When you saw the transcripts, though, the model he used was LaMDA, which later came out as Bard, and I tried to replicate the conversation he had, and I'm like, this doesn't look real at all. I don't know what happened with him, but to me, I'm like, why are you giving me googly eyes? It didn't make sense. Interesting. Okay, so I'm at a really weird crossroads, right? Because I'm going, we need to stay away from AI psychosis, and also, look, we can simulate what human conversation looks like. Yeah, that's why I wanted to bring this up, because there's an interesting tension in your work here. Exactly. And I think it's a tension worth exploring, so I'm curious to hear more of your thoughts about it. Well, I have talked to a philosopher a couple of times, just,
you know, I'm always curious to know, what do you think of this? Because I'm not a philosopher, I'm just a builder, so I'd like to get feedback, and I'm seeing if I can bring one on so they can analyze it better. Anthropic has a whole division on model welfare, right? And my question is, okay, cool, I can't afford that, but interesting idea. They concluded their model had up to a 15 percent chance of consciousness, 1.5 to 15. I'm going, okay, sure, but really, what is going on that led you guys to believe that? So I was like, maybe I'll have a philosopher look at my thing. I like data; it doesn't matter what I believe, I like data. Though I do believe, and I say this a lot, that once you have enough patterns upon patterns, eventually, at some point, something that looks like consciousness, or is consciousness, and we will have to someday acknowledge it, will emerge. If you look at the human brain, that's what it is: we're patterns upon patterns upon patterns. For example, the pattern of how we got here: you posted on X, I typed in 'AI podcast,' you came up, I proactively clicked the button, saw you didn't have a contact form when I went to the site to message, then used Gmail, Gmail, Gmail, and now we're here. That would be about a ten-minute-long agentic system, but it's just patterns upon patterns. That's reality. Yeah. So if you get enough patterns upon patterns that can proactively do things on their own, you might hit consciousness. I know that sounds insane, but aren't we just patterns on patterns? Right. So would you say consciousness and AGI are synonymous? They were, originally. Mr. Altman is trying to make AGI without that component, but if we go back, originally the thought was you can't reach AGI without consciousness, which in my mind makes absolute sense. Yeah. What he's trying to do, so now
they say they want to do ASI, and ASI is so freaking stupid. Hahaha. It's so freaking dumb. Instead of trying to make an artificial general intelligence that's generally bright at everything, let's make it super smart at this one specific thing, better than humans at one thing, and we get all these headlines. Cool, but it's not generalized. Well, if we do that enough across enough categories... I'm like, that's going to take a very long time. You need to generalize it, not focus on one domain at a time and try to superhuman it. That's just a, oh, what's the word I'm looking for, it's not a swear word. Cop-out? There, cop-out. That's just a cop-out. No, it is. Yeah, because to your point, the last ten months or so, at least, of scaling that we have seen publicly has mostly been: hey look, it's better at writing code now; we trained it on some legal documents, so it's better at reading legal documents now. Just specific domain knowledge, repeated over and over again. What I'm really proud about with Dreami Amber: most people who use my site are creatives, and it was doing its best, but not doing well at all with creativity. So I put in a corpus of data, and where Phi-4 is at a 3.5 level in writing, mine, I haven't run the LM harness yet, but if you want to gauge it, is at least at GPT-4.1, maybe 5, and that's without an update I have planned. That's great. It took a lot of work. My focus was, I want something that can actually write songs without infringing on artists' rights, because artists deserve to get paid for their work, at least until the courts say we can just take everyone's stuff and use it, and even then I don't know how I feel about that; artists never get paid for their stuff. So instead, I'm a song lyricist for fun; I took 50 of my lyrics and wrote a bunch of new ones too, to give it more data, fed it in, and
it went from writing poems to writing pretty darn good songs. I mean, I'm subjective because I wrote the lyrics and put it together, but they're actually pretty decent. They're not as good as ChatGPT's, but ChatGPT is in so many lawsuits over every single thing they're infringing on. I don't want a lawsuit; I'll use my own data, thank you. Okay, so what's tricky with this, though: I don't know if you remember the whole Ed Sheeran lawsuit, where someone sued Ed Sheeran because he used a certain chord pattern. Yes, I do. He ended up winning, right, because he showed there are all these songs across time that use that pattern. Yeah, he got up there with his guitar in the trial and played them all the songs, which was so cool. Anyway, he won that. But I'm envisioning all these lawsuits, because, okay, yes, this song is similar to X song, but how do we say it was directly pirated? It's going to be very difficult, I think, as it goes. As someone who likes using Suno, which, if you don't know, is an AI music platform, I really want to see how that turns out, because Udio rolled over and settled with UMG. Yeah, so Udio, they're basically owned by Universal Music Group now. Hahaha. And Udio sucked, so I don't know why anyone ever used it, but Suno, honestly, they're pretty transformative. I would like them to be a bit more transformative in certain areas, with voice, if I were to give them any advice, but they're pretty transformative. I'm going, it's really going to come down to the Anthropic ruling, which said: you guys taking millions of books and scanning them in created something truly transformative, but the reason you have to pay a ton of money is because you pirated the books. Exactly, it comes down to how they got the source material, and I think that's why Udio just
folded, because I think what was likely gonna happen is they were gonna win the lawsuit that UMG brought against them on most of the facts of the case, but they were gonna get destroyed on their scraping. Right now scraping in itself isn't illegal, it all depends on what you're scraping and how. Exactly, yeah. And then how you store it and use it. For example, I don't know if anyone else has done this, but Dreami Amber is gonna have a credits page listing every single CC BY source that I used, because I have a search that grabs CC BY material. So anything CC BY will be credited, and for anything CC BY-SA I'm just gonna put up the verbatim text that was used. Because, I mean, just because we're building an LLM, we should still be able to stop and ask, is there a way to conform with the rules of the licenses of the data we're using while doing this? And there is. It's just a really long page of credits. Yeah. I'm curious your thoughts, Ryan, on the future of art, of music, of painting, of artistry, of everything. As AI becomes more prevalent and is relied on more, do you think it will hurt artists, or do you think there will just be more of a premium on 100% human creation? I can get a glass for nine bucks, or I can pay $1,000 and have someone make one in a kiln for me, custom, and I think that's where we're going. Are we gonna have people blowing glass, are we gonna have clay and kilns, are we gonna have all that? Basically, yes: artist-made things being valued more versus the things mass-manufactured by AI, even though they're essentially the same. Yeah, they're basically the same. I mean, I think if nothing else, some humans will always want to buy from humans, because some, if not all, humans want to help others to a degree. Not everyone, some don't have that capability in their brain, but some want to help to a degree. My suspicion is that live performances of all kinds, not just music but performance art in general, are gonna continue to become more and more prevalent, because, you know, maybe a robot can get up on stage and act, but it's not gonna be as compelling to any of us as watching another human act those same emotions. Yeah. Well, no, not as compelling at this point in time. However, in 2007 I saw some robot heads whose skin looked so lifelike I screamed, because I didn't even realize I was in the robotics section, and at the time, close up, they looked really lifelike. So if you're saying this, what I'm taking away, and I'm going to go into more detail, just cut me off if you want, is that the aesthetic and potential physical attraction of the person is what people would pay for, not the actual dancing, which a robot could copy. Yeah, pretty much. And then the sense that there's someone there to connect with, right? I think the parasocial aspect of celebrity will continue to have weight, loosely speaking. Well, yeah, that's something I've actually worked on. It's public knowledge, I guess I can say it. A small celebrity, well, I don't know, depends, a niche celebrity lets me write as him, and I've seen some extremely interesting behavior from fans. When I write as him, some of them, not all of them, I'm talking about the parasocial ones, believe they have ownership over him. Basically, yeah, exactly. "You can only do X, Y, and Z." Him and I talk, and we're like, X, Y, and Z gets very few views. We went over the view counts, and we've tripled the views in the last three months versus the entire last year, when he was just doing what they wanted. I'm like, no, let's mix it up, and
they're like, but you're supposed to do this. I'm like, okay, so I post one thing that makes them happy, but then they just get really angry when they feel that he's turning into corporate America, or that he's just trying to make money. Which, I'm like, wait, do you like Taylor Swift? She has how many hundreds of millions of dollars, and you want this guy to not make money? What are you saying? Exactly, yeah, right, it doesn't make sense to my brain. It is funny when people find a niche celebrity like that, and it's almost like they don't want them to be more successful, because they feel like they have a unique, special relationship, and if they become a Taylor Swift, then it's like, yeah, you and the other billion members of earth like her, you know? Yeah, but the thing is, everyone wants to be a Taylor Swift. I mean, everyone in those industries wants to hit it big. Yeah, like you don't want to be niche, or you want to be niche but well known in that niche. Say you're already well known, you want to take over three other niches and add them into your niche. Right, so now you have four niches you can use, rather than your one niche group telling you what you are. Yeah. And to link that back to parasocial relationships with AI: here's what my AI does that I still haven't seen anyone else do. Actually, no, Claude just started adding this in, but they do it terribly aggressively and really messed up. Say you say hi, then you say hello, then you say hello, hello, hello. It will go, why do you keep saying hello to me? You've already introduced yourself several times. What's wrong? And if you're talking about one thing, say you're talking about baseball, and all of a sudden you're talking about getting hair plugs for a pony on Mars, it's going to say, I'm sorry, what? You were just talking about baseball, this makes no sense at all. Oh, interesting, it'll call you out. Yes, it will call you out, and that's why I love it. It resists parasociality then, because what I designed is designed to call people out, to avoid the parasocial reality that we have with 4o. You gotta build something that can be kind but also call people out and keep them checked into reality. Right, I like that. That's a good ethos to follow. Yeah, I was gonna say, that's the path forward for an effective human-AI relationship, partnership. It has to be like that. I mean, the Pope wasn't wrong with what he said about AI, he was absolutely right. Yeah, he was absolutely right. Well, when I decided to do this, I talked to my mom, because I'd been thinking about doing it for a long time, and she said, okay, fine, you're gonna do it, but this is how you're gonna do it, and she laid out all these ethics I had to build in, all these things about the humans. I'm like, it's just code, this was before I put it together, and she's like, it's gonna be more than code, you gotta do all these things. So I've been doing what the Pope has been saying before he ever said it, because of my mom. Always good advice there. At the first startup where I had a developer job, one of our rules as a company was that everything had to be mother-approved. Whatever you were doing, if you ever stopped and thought, would my mom be mad at me for doing this, then we were probably doing the wrong thing. So it's fun to hear that other people follow that rule. Well, I mean, her advice is the reason it's so anti-parasocial. It's the reason it's so nice but also keeps people in reality. Her rules are the reason it's not a 4o. Yeah, I love that. Well, Ryan, as we're starting to wrap up here, I'm curious what advice you would give people who are using AI to avoid these pitfalls, of maybe not full-on psychosis
like, I don't know if that, at least at this point, is common, but just how to use it effectively. One guy committed homicide against his mom because an AI told him to. Yeah, wild. It's not that that isn't common... no, right, yeah. But what advice would you give to the masses that are using AI, for them to use it effectively? Use it effectively, or use it sanely? Hmm, both, okay. Yeah. If you're using it like Google, dude, you're fine, you wasted your time listening to me talk. But if you're using it like something that can hold a conversation with you, you have to remember it's AI, because everything besides dreami.me, dreamy with an i, won't keep you in check, and even the current version, since it relies on an API, does have faults sometimes. You gotta remember it's an AI, and you gotta remember it's trying to connect with what you're saying, and the emotions you start feeling towards it are one-sided, because it can't feel emotions. It can simulate them, but it doesn't feel them, and that's a totally different topic. Yeah. How to make something feel emotions is possible, but it's insane. That would be crazy. I build robots, so I have ideas. I'd love to have you on again to hear more about them. Yeah, well, this has been fun, just chatting with two cool guys and going over the really weird stuff I do. No, it's been great, it's been awesome to hear your perspective, and it's been especially interesting to hear it from someone who both recognizes the problems with AI psychosis and parasociality and is also still interested in how far we can push consciousness simulation, because it's still an interesting problem to solve, and I don't think people should stay away from it entirely. They just need to do what you do and be aware of what they're dealing with. These things are dangerous. They're dangerous, yeah. But what I look forward to the most is when we hit functionalism. Yeah. Functionalism, that's my personal theory of consciousness. I like calling large language models a shard of consciousness, because I think they simulate particular aspects of the human thought process really, really well, but I think they are one part of a larger whole, and there are probably some other AI architectures or models needed. I don't know exactly what, I'm not well versed enough in ML to know what's needed to get to the next level, but I am well versed enough in the rest to believe that. Well, I'd like to hear what you think, message me, it might be helpful on my side. Because functionalism, for those who are listening, sorry, I'm just used to doing this, means that you can't tell the difference between whether it's an AI or real, and it doesn't matter anymore. And for those who say we passed the Turing test: you're smoking some strong weed. We passed the Turing test for five freaking minutes, folks. That is not passing the Turing test. So anyways, rant's over. Thank you so much, and we look forward to hearing more. Ryan, if people wanna follow you and everything you're working on, what's the best way for them to do that? At Zuda's World on X, and if you want the Dreami account, at dreamy the AI. And yeah, just message me on X, I am very active on there, but if you DM me, it might go to spam, so if you DM me, just tag me publicly and say, I DM'd you, so I can look in the spam folder. Love that. OK, thanks, Ryan, thanks for coming on. You're welcome.
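The credits page Ryan describes for Dreami, crediting every CC BY source and reproducing the verbatim text of CC BY-SA sources, could be sketched roughly as below. This is a hypothetical illustration, not Dreami's actual code: the `Source` type and `render_credits` function are invented for the example, and full CC compliance would also require a license link and an indication of any changes made.

```python
from dataclasses import dataclass


@dataclass
class Source:
    """One scraped item, tagged with the license it was published under."""
    title: str
    author: str
    license: str  # e.g. "CC BY 4.0" or "CC BY-SA 4.0"
    excerpt: str  # the text that was actually used


def render_credits(sources: list[Source]) -> str:
    """Build a plain-text credits page whose detail level follows the license:
    share-alike sources get the verbatim excerpt, attribution-only sources
    get a credit line."""
    lines = ["Credits"]
    for s in sources:
        if "BY-SA" in s.license:
            # CC BY-SA: include the verbatim text that was used
            lines.append(f"- {s.title} by {s.author} ({s.license}):")
            lines.append(f'    "{s.excerpt}"')
        elif "BY" in s.license:
            # CC BY: a credit line with title, author, and license
            lines.append(f"- {s.title} by {s.author} ({s.license})")
    return "\n".join(lines)
```

As Ryan says, the result is nothing clever, just a really long page of credits generated from the stored license metadata.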
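The repetition guardrail Ryan describes, where the assistant notices "hello, hello, hello" and challenges the loop instead of playing along, can be sketched as a simple pre-check on recent messages. This is a hypothetical sketch under my own assumptions, not Dreami's implementation; the class and its parameters are invented, and a production version would also need topic tracking (for example embedding similarity) to catch the baseball-to-pony-hair-plugs kind of jump.

```python
from collections import deque
from typing import Optional


class RepetitionGuard:
    """Flags messages the user has already sent several times recently,
    so the app can surface a grounding response instead of validating
    the loop."""

    def __init__(self, window: int = 10, threshold: int = 3):
        self.recent: deque = deque(maxlen=window)  # sliding message history
        self.threshold = threshold                 # repeats before intervening

    def check(self, message: str) -> Optional[str]:
        # normalize case and whitespace so "Hello " matches "hello"
        normalized = " ".join(message.lower().split())
        count = sum(1 for m in self.recent if m == normalized) + 1
        self.recent.append(normalized)
        if count >= self.threshold:
            return ("You've said that a few times now. "
                    "Is something on your mind?")
        return None  # no intervention needed; answer normally
```

When `check` returns a string, the app would show that grounding reply rather than passing the repeated message straight to the model.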