AI Rebels

Tarifflo ft. Bryce Judy

Jacob and Spencer Season 2 Episode 14

Bryce Judy rejoins the AI Rebels crew to discuss his latest initiative, Tarifflo, a project aimed at revolutionizing the classification process for trade and customs through automation. Having pivoted from his earlier work on language translation for low-resource languages with the app Pendian, Bryce now focuses on addressing inefficiencies in HTS (Harmonized Tariff Schedule) classification, particularly through AI-driven solutions. He also shares insights into how his work on low-resource language translation continues in the background, with a new framework designed to tackle underrepresented languages more effectively.
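The retrieval-and-cite approach Bryce describes later in the episode (pull the most relevant customs rulings for a product description, then have an LLM build a rationale that cites them) can be sketched roughly as follows. This is a toy illustration, not Tarifflo's implementation: the ruling IDs, codes, texts, and keyword-overlap scoring are invented stand-ins, and the real system reportedly uses an LLM plus an image-to-text model over a few hundred thousand government documents.

```python
# Toy sketch of a retrieval-augmented HTS classifier: index a small corpus
# of (hypothetical) customs rulings, retrieve the ones most relevant to a
# product description, and surface them as cited precedent. A real system
# would use embeddings for retrieval and an LLM for the final rationale.

from collections import Counter

# hypothetical mini-corpus; ruling IDs, codes, and texts are made up
RULINGS = [
    {"id": "NY N301234", "code": "9018.90",
     "text": "tariff classification of disposable plastic biopsy forceps medical instruments"},
    {"id": "NY N305678", "code": "4016.93",
     "text": "classification of silicone o rings gaskets of vulcanized rubber"},
    {"id": "HQ H287654", "code": "3926.90",
     "text": "classification of articles of plastics not elsewhere specified"},
]

def score(query_terms, doc_terms):
    """Crude relevance score: count of shared terms (stand-in for embeddings)."""
    return sum((Counter(query_terms) & Counter(doc_terms)).values())

def retrieve(product_description, k=2):
    """Return the k rulings that best match the product description."""
    q = product_description.lower().split()
    ranked = sorted(RULINGS, key=lambda r: score(q, r["text"].split()), reverse=True)
    return ranked[:k]

def classify(product_description):
    """Retrieve precedent, then (in the real system) ask an LLM to write a
    rationale citing it; here we just adopt the top ruling's code."""
    precedent = retrieve(product_description)
    return {"code": precedent[0]["code"],
            "cites": [r["id"] for r in precedent]}

result = classify("disposable plastic biopsy forceps for medical use")
```

Swapping `score` for an embedding similarity and the last step of `classify` for an actual LLM call that drafts the rationale is where a production system would differ; the structure (retrieve precedent, then classify with citations) is the part the episode describes.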

hi everybody and welcome to another episode of the ai rebels podcast we have a familiar face on the show with us again today bryce welcome back thanks for coming on thank you i'm stoked to be back good talking with you guys again yeah we're excited for those listeners who didn't see the first episode with bryce uh bryce well i don't wanna give anything away but bryce was a student in utah developing um an app it was called pendian you can go look it up season 1 um language translation is that fair high level language translation yeah yeah specifically like databases for low resource language translation but yeah right along those same lines and we're just we're very excited we were talking just before we started recording and there've been some big pivots in the recent months can you tell us like since we talked it was probably five months ago five that's about right yeah you were a student at utah state university developing an app called pendian what's happened now where you at yeah so a lot has changed like you said i'm no longer a student i'm now a dropout um and now i'm working on a couple different projects yeah yeah figured i'd make a better story um so anyway i started off with those databases and the original idea for pendian had even been something different like months before that so things have like definitely changed a lot so i got these databases together i started approaching google microsoft anyone i could think of that was buying language data and i was just like i don't know reaching out to them constantly like seeing if they would be interested in buying these these data sets um finally talked to one person and they just said oh we're busy we're not interested and like that was it that was the entire conversation um come to find out months later the language that i had like already built the database for kekchi um they released it on google translate like a couple months after that so it was like a month or two ago
that they released it um so i i felt kind of gypped i was like oh man they they beat me to it i kind of moved on from that space for a little while um and i i decided to start looking into like automating what my my previous job was so i was a trade analyst previously um and i would do a lot of customs brokerage work and there's a specific thing called um hts classifications and i started looking into like how can we make automation better cause traditional approaches had been fully reliant on machine learning and studies that have been done on them show that they're only like 70% correct and so there's like a big margin of error there you get this wrong you're getting delayed shipments you're getting fines legal fees so that's really what i set out building and that's what we have right now that i'm i'm mostly focused on and on the side i've picked up the language translation again and actually built um a framework for low resource language translation i'm pretty excited about yeah that's amazing um first of all well done on the pivot you know founders like to do that but can you tell us about that experience like what was it like when you saw that release from google what did you feel what was going through your head i was kind of bummed obviously cause i'd been spending like months like getting data like figuring out different techniques to extrapolate you know all these things and then that got released i was like well that's done i pretty much just like threw in the cards right there i was like well this was my main thing and i'd done some tests with it and it looked pretty good um and so i was like well yeah there's that they've kind of got their foot in the door here so i kind of stepped out of the space like immediately when you say stepped out of the space do you mean like what does that mean so i just like i had all the data and everything and i was like going hard like trying to outreach to these different companies
and once i saw that i was like you know what and i just started working on other projects and just kind of like put it on the back burner for a while yeah yeah got it so this hts hts classification is that what it's called yeah that's right so what is your approach to automating that i'd be really curious to hear more about that yeah so like i said kind of traditional approaches are are all about uh natural language processing they take like certain factors of that and have basically like a decision tree um that will classify it so i wanted to approach it more like i would approach the problem as a human classifier and so instead of taking import data from the past because those codes are constantly changing new regulations get passed everything i wanted to get the most up to date data so where was the direct source of truth there so i pulled all the court rulings from the government database so we got a couple hundred thousand court rulings we've got so in classification it goes by section chapter and then there are also explanatory notes and then you have the subheadings and headings and so i took all the documents for all that stuff and i made a couple different databases with all that different information organized and then i just built a way that the uh llm can go through and reference all those different databases and pull in every relevant document and then build up precedent based on the court rulings and everything and also attached like an image to text model or a text to image model or well i got that mixed up all right an image to text model so i can take in product sheets and it does both optical character recognition and image recognition so it can take in technical product sheets transform that into the information that we would need in text and then from that text then it would go reference all the court rulings and everything so it's kind of a different approach um but based on the testing that
we've done it's uh it's doing pretty well so that's awesome so were you about to ask the same question jake are you working with a team on this or is it just you still yeah so i now have a co founder uh he's a full stack developer his name is tanner and he's up at utah state still and then i've been doing just all like the data modeling and business side so right on right on that's exciting there's there's two of you now that's fun yeah pretty exciting so this hts like what is the use like is this a subscription that a company like a big four accounting firm would pay for and they would just be able to run classifications through this yeah so right now the the couple beta tests that we have right now are all logistics firms and customs brokerage firms so i guess we have one customs brokerage firm and two logistics firms and they have people that usually go by hand and assign these codes and to get like a really good like solid ruling on it basically um you either gotta submit it to customs and border protection and wait weeks to get something back or you go by hand and you build up like what the model does and when i was doing this by hand it would take for like the simplest items it would take like 15 minutes for complex items you're at an hour and a half or longer easy and so our model averages about 20 seconds but we can spin up as many different models as we need and so the average is about two seconds or less per item it averages two seconds hold on let's just do the math so if we do 15 minutes times 60 seconds that's 900 seconds and you're doing it in two 2 / 900 that's about 0.2% of the time that's nuts ha ha yeah everyone that has been on it so far has loved it um we're just trying to get like an api right now we're trying to get like a shopify app to integrate with all of like these dropshippers cause starting in november
there's gonna be a law passed um that removes the de minimis rule so right now any shipment under $800 in value doesn't get a tariff code and it's not tariffed starting in november one billion shipments per year that fall under that threshold that are imported into the united states are going to need that and so we're trying to get ready for that yeah i was gonna say so you guys are hitting the market at the right time you got pretty dang lucky jeez so yeah do you have can we see it is it something you could share so we can see how it works yeah let me share my screen with you guys and i can show you okay here we go you have three active customers right now that's right okay alright so we're just gonna do it um the image way the csv way is the way that we're averaging two seconds per thing um this one basically you just put in like whatever product code it is you upload the product sheet itself um let me get the demo so these are endojaw biopsy forceps for olympus medical and then it's going through our image to text model right now it'll take a couple seconds and then it will pull up basically a way that you can like go through and double check all the information that it pulls so our product is this this is our description this is the material the end use and it's a finished good all looks good so i'm gonna do classify and right now it's going through and it's referencing those you know 350 ish thousand um unstructured data documents from the government and it's gonna pull up um build a rationale and then it's gonna give you your codes it's gonna pull in section 301 so it'll show like what it would be from china and then it will show like all trade agreements for the united states for that item so obviously i've ran it quite a few times but the interesting thing is every run is is independent of another one so i can update the data with a completely new data set for a different country tomorrow and it would do it just for that country so this is hts it
pulled up 9018.90.8000 and if we go to that oh we're actually gonna go to the court ruling that it cites there we go so this is a tariff classification of disposable plastic forceps from china so that's what it referenced to make it um and here's the rationale so the fb 215 u endojaw large 2.8 biopsy forceps medical instrument made of stainless steel plastic components so it cites gri 1 a court ruling this one the explanatory notes for chapter 90 and then it gives that classification that's right yeah so actually if we look at other ones it writes it a little bit different every time it does it and so it kind of adds like that little bit of randomness into each one um but because it goes through and reasons like a human does it's more flexible and it's more accurate that's super interesting so what uh what's kinda like the i guess scoring algorithm you're using to determine the winner is it just purely like kind of human preference testing like hey do you guys like this you know generated code or not or is it you have some sort of you know fake scenario that you run it through yeah so right now it's just human preference cause the thing with like these rulings is technically there could be like several that could work for an item and the classifier's job is finding what is most correct and so basically we pulled in a couple different machine learning models that are on the market right now we ran like different items through them and there's some that are obviously incorrect that the like other machine learning models will do for example like a silicone o ring silicone it's like should it be classified as a rubber or a plastic and what normal people would classify it as is the opposite of what the government actually classifies it as based on court rulings and so that's an easy one that we can just be like okay you guys are obviously wrong on all of these ones and you can immediately rule out like they're
incorrect incorrect incorrect for a couple examples like that and then for the rest of them we just do human preference testing so we'll pull up a couple different classifications we'll show the rationale as like an example of like what they can go reference and then get the human preference of all the different codes that were pulled up and then ours wins nine times out of ten that's awesome yeah i like this i like this application a lot because i think that um it's a good application of language model strengths like there's way too many people who are trying to force language models into like a very deterministic workflow you know where it's like you need the exact same output every single time so like this is a really interesting and really compelling use case where it's like hey there's you know there's a lot of subjectivity in this process already um and so you know the fact that language models are a bit random and a bit subjective is actually a little bit of an advantage almost um when you know given the proper information proper guidance um how long did it take you to get to this point um this has been we ideated this in the spring and then um basically it's been like a slow building process cause i had to learn i did completely redo the way i was thinking about this originally because i was trying to follow like traditional approaches which just are not gonna be good especially cause this is an election year and tariffs could be changing pretty dramatically in the next little while um i had to completely like reframe how i was thinking of it and especially with what you're saying about like the subjectivity i was having a hard time getting those deterministic answers for an item consistently and so when i realized okay the rationale itself can change as long as it's citing correct information but the end goal the end classification needs to be the same every single time so how do i get that deterministic answer even while using like something
that has like a little bit of randomness in the research phase itself and so getting those things to work together it's been pretty tough but um pretty excited about where it's come you know jeez so this is so it's two people right now do you have additional costs like how are you targeting customers right now so right now we've been live for like less than a week um yeah i've just been cold calling so lots of cold calls cold emails reaching out i don't even think we're on like if you look up tarifflo on google i don't even think you'll find it yet like we're like early early like so it's just been like kind of guerrilla warfare trying to find people to get it but it's been interesting every time i've talked to somebody that knows the issue and experiences it it's like i i don't feel like i'm selling anything like this is literally like we're just streamlining their entire processes you know we're helping them be able to handle many more customers and so the value add is just like so much more yeah this is a perfect example of where ai is taking us i think you know like there there are these tasks that are just a time suck you know and it's not like a human's gonna be better than this model you know and it's a huge time suck you think of the cost savings if it's taking someone from 15 minutes to two hours for each one of these i don't think anybody's gonna be like oh no ai don't take this task from me right i think everyone's gonna be more than happy to let the ai do its thing and we'll just inspect it to make sure it's right it's awesome exactly like human in the loop like yeah we still need those people to like double check make sure everything's good um i think that's just an important thing in general you know i don't think we're to the point yet where we can just like fully give the reins to ai and and trust it but i think at a certain point and it's already like to that point for me honestly with the classifications like um where i
can just give it to it and then it will give the answer back and i i like trust it like i'll go through and double check it cause that's what i'm supposed to do but uh like i know it's it's right and so i don't know yeah that's the hope just to get rid of all of these monotonous especially with unstructured data handling tasks for for humans and and give us more like the high level thinking tasks rather than the actual manual labor tasks yeah that's awesome so is it now just scaling it is there any additional work being done on the data sets necessarily or now is it more like okay we're live let's just get it out there yeah so we've got kind of a couple different things i see this being like a global impact company honestly cause um like the united states is the biggest importing country in the world which is why we're starting here and i live here so that that helps you know yeah it's kind of convenient but um really the plan here is we're starting with logistics firms we're moving into importing companies companies that import high volumes of goods into the united states looking at like ebay looking at these different like sites yeah you know shopify accounts things like that where we can integrate it in and make it seamless to that process and then we're looking at also expanding internationally so you look at countries like madagascar they have a 50% inaccuracy rate in their classifications so the fraud rate basically is ridiculous they're losing 15% of their tax income collected because of misclassified items and so it's like kind of the chicken and egg problem like they don't have enough people that they're hiring to go through and audit these things so they're not collecting more money and so they can't whatever right so i see this being applied to government contracts for emerging market countries that have high levels of import fraud and being able to help them reclaim some of those fraudulent import um like tax
dollars and obviously we'll be able to boost their you know economy with this and kind of those companies that have been like skimming off the top or like doing these like illegal things will get caught and fined and then we'll obviously you know be taking off the top as well a percentage of what we're able to help the government save so i i see this in both the private sector and the government sector in in every country in the world so that that's really where we're trying to go it's awesome question no thank you it was a good one so are you planning ah here here is my question do you believe that it's going to be easier to train people to check the output uh rather than do the work themselves oh 100% yeah so like in the output that i showed you guys it has like the the court ruling it cites and has a link to it you can click that and look and it has a similar item that's been previously classified it's like super easy for somebody to go check that and you can go click on the link for the like the code itself and you can look at that and it's like obviously like the right one you know and anybody can go through and follow through that thing look up everything that they don't know and be able to check that i could have somebody that doesn't know anything about hts codes and get them like up to speed in you know 10 15 minutes of how to use the system and how to go through and double check it so that's like that's another huge benefit is uh yeah just the ability to train people faster it reduces logistics cost for the for the company like internally not just you know not just with their shipping but with their internal company logistics it gets much better that's really interesting yeah i love that you mentioned something bryce you said you would get potentially a percentage of the money you help the government or company save is that how you're planning on doing it is it like a fee within like a percentage clause of some kind or like how does
your pay structure work with this cause it is something where potentially you could be saving them a lot ha ha yeah so that that was specifically for like the government sector uh i think it would be like a cost savings benefit where we could just like onboard them for free get integrated into their systems for free and then you know every audit that they make where they're able to like recover money from that then we would just take like a you know fraction of a percent or a percent whatever um but with companies right now we're doing it on a subscription basis so um they pay monthly and they get a certain number of tokens um that goes to like classifications and then they can get more of those as they use more um and then when we're selling to like single item importers we can sell just by the item um it's more expensive but if you just need like one classification for example like you're importing this one random good that you invented you don't wanna like spend the cost of hiring a broker for $200 an hour then you can just boom get your classification put it in you know get things going so and how did your um previous experience with the the language databases you know pendian how has that affected your approach here i think just thinking about not just language translation but like all these problems has got me like excited about just reading papers cause like to be totally honest guys i'm not that smart like i'm not that intelligent but there are people in the world that are wicked intelligent and they publish papers and some of these papers are groundbreaking like world changing like ways of looking at problems that get like five views and you know two of those views are like the person that published it and like their mom you know and so i think the biggest thing that i've learned through this process is just like finding that these things have already been invented we just need to find them find the papers on
them find the research on them understand the research that they've done and then combine that with other research that you found and and then you can make some pretty pretty amazing things yeah it's a crazy world out there now the wild west i i was curious and i think that maybe you mentioned something about this earlier but with these classification codes when they are sent over to i don't know who handles them us department of state i don't know who handles those department of commerce whoever are those immediately kind of validated by federal government employees or is it just kind of like the honor system like okay like you know we believe that you classified this properly but if we find out later uh you know that you've been classifying it improperly then you're in trouble we're gonna slap some fines on you yeah so it's it's with customs and border protection and it's basically like if you've ever like left the country and you've come back and you brought stuff and you've got to fill out your little duty form like what you brought like all this stuff um and sometimes you'll get audited with that so sometimes they'll go through and actually look at like the things that you brought in and make sure that you filled it out right um but that doesn't get checked like insanely consistently to be fair like there are 11 billion dollars worth of products imported into the united states every day and so for them to like go through and manually check all of those i mean obviously it's like pretty insane but the audits that do happen they're like a lot of times there will be fines you have to repay things you can even be blocked from importing if it's consistent got it okay so you're you're sold on the ai space haha oh i love it yeah it's so fun bryce what are you i mean you're this this young ai founder you're building this company are there any ai tools that you personally use or love just like on a day to day basis whether it's for your own
personal enjoyment whether it's business related are there any tools that you like yeah so i use actually three different ones for a mix of like research and then just random questions and then um even like speeding up coding sometimes so i use you know the new gpt model that was just released like the o1 preview and i use claude 3.5 i love claude 3.5 like long form and like reasoning uh it does really well and then i use perplexity for most of my research and perplexity is basically just like you know the vector store of the entire internet that it pulls in so i love that yeah those those are solid the big big ones have you used cursor at all for coding i actually haven't you should check it out um there are people who are bigger fans of it than i am but it is very cool uh and really nice it it just gives you you know access to chat gpt and sonnet code completions like in your editor it's basically vs code with with you know code completions so let's do this you also mentioned that you have another thing in the works um kind of calling back to your work with low resource languages from from earlier this year i'd be really interested to hear more about that as well yeah so kind of what got me into the low resource languages is i i lived in guatemala for a year as a missionary um i love those people um it was difficult for me being there cause you would see kids reading in spanish fluently but they didn't understand spanish because in school they would learn how to read spanish and not their native tongue and so my goal there is is you know by the end of my lifetime i'm hoping that there are book clubs with you know lord of the rings in kekchi like i just think that would be so cool you know um and so i had this database collected right i had like 100,000 language pairs that i had like put together and gone through and like validated and everything and so a couple weeks ago i actually got taken to lunch by a
translation company that had known i was working on this thing previously and they were like hey you should pick this back up like we wanna partner with you or we wanna like you know potentially be a customer for you and so that kinda like lit a fire under me i was like you know what i think i could do this and with what i learned building tarifflo i actually approached the problem in a similar way that i had approached tarifflo which was completely different than traditional neural machine translation like ways of going about the problem and so i didn't know how it would work honestly i just kind of on a saturday i just took like you know the entire day from the morning till night and was just working on this thing um i started testing it and i was like holy smokes like this is good like i know kekchi pretty well but i'm not like the greatest and so i wanted to like validate that it was like actually good so i translated like the first page of lord of the rings and then i translated a bunch of like medical text and then some legal text and i sent it to a couple of my buddies that are in the translation space and then my mission president actually still lives in guatemala and i was like hey like is this good and the feedback i got from all of them was like these translations are perfect maybe i would change like one word here and i was like all right like this is it cause traditionally you need like 10 million translation pairs to do like a good neural machine translation model at a baseline but with difficult languages like kekchi where it has like weird grammatical rules and conjugations and everything i didn't even think it was gonna be possible with 10 million and so now that i got this i'm like man this is sweet so i did it in spanish as well just to like prove in a higher resource language that the same approach could be used and then right now i'm working on navajo k'iche' and like 60 other low resource languages so
that's fascinating uh and so obviously i'm sure that you don't wanna give away too much of the the secret sauce here um but i'm really curious to hear about like the procedure you've developed and again anything you feel like you need to sit on for now and keep private i fully understand and will not be offended yeah so i guess i won't say too much but but the one thing i will say is a network of agents approach to solving complex tasks um yeah that that that's kind of all i'll say it is fascinating how much that's coming up recently like i feel like when chat gpt first launched and with chat gpt 3.5 and and 4 um agents were not ready at all uh like the ones that you know were hypothetically possible were gonna be way too expensive to run um but it feels like more and more like just incremental improvements in the models have made agent approaches a lot more reasonable and i also think that there's a lot more people building agents now as well yeah cause i've heard basically that exact same answer from from a lot of friends recently you know when i asked them like hey how did you solve this problem yeah bryce for our listeners can you just give like a high level summary of what the network of agents approach is so basically like think about it as like a corporate team right you've got the ceo or like you got your c suite that directs like the vps of like each individual department and then they direct like managers and those managers direct like individual workers that work on like specific tasks um so by combining like all of the information from the entire organization um and funneling it to the top you're able to achieve like much more than one person would be able to do on their own so using the same approach with large language models it improves the outputs drastically that's super exciting yeah have you ever thought about publishing a paper on some of the techniques that you use i've
definitely thought about it um i don't know i don't know if i will uh at least not yet i i think right now i just wanna like build as much as i can but to be fair the amount of people in the world that care enough to build something and then try to like monetize it you know i i think it's like pretty pretty low so i don't know maybe in the next couple years i'll write a paper on it but for now just kind of sitting on it nice nice keep our eyes out for it yeah with these low resource languages though with this company you know assuming this approach keeps working you're able to turn these out like what are you gonna use it for like how are you gonna approach this how are you gonna put this into actual use in circulation so this is definitely more like humanitarian side like these languages for the most part aren't gonna be like super lucrative like there's not like lucrative advertising money or whatever you know these people are are indigenous like poor um so the idea right now is both helping like organizations that do high volumes of translation to these lower resource languages you think like the eu you think the church of jesus christ you think of jehovah's witnesses um that have a lot of different people like in these different groups that they wanna be able to reach when they make like a publication or they make a statement or there's like a speech they wanna translate it into these languages and so being able to have it instantly and accurately translated for all their people like that that becomes very beneficial um another application a lot of people also in these languages prefer like audio visual you think like it's like the early days of youtube for indigenous language speakers right like there's a couple things in their language and not very much is out there for them and so that's why i'm looking to partner with um some text to speech uh either companies or individuals um that are
able to do low resource language text to speech and then we can do voice cloning of like the speaker's voice and so we can automatically go through and dub like youtube videos or you know whatever it is into those people's target language um because you know with literacy rates as low as they are for these languages like honestly like videos will be more impactful that's really interesting it's gonna be go ahead jake i was just gonna make a yeah i was just gonna say i just love how both of your ventures right now are from your background and i think that's a trend where we're seeing the most success i mean i think this is in general but also in like the ai space like you're able to ideate and then act and ship things so fast if you're familiar with the space just because there's these tools available to you where you can actually create something and i just i mean one how cool is it that you're building this startup for hts classification you know automation and then you have this on the side and you're able to actually do that i think that speaks to your business models yeah um anyway i just think it's cool thank you yeah i love the space definitely like the perfect time in the world right now for being able to do these things like 5 years ago this would have never been possible like it would have taken you know a lot of people working together with like large amounts of data data collection specialists everything to make this possible um so i honestly just feel blessed that these tools are available now um and i just think it's fun to mess around with them have you seen that uh xkcd comic about identifying birds we pulled it up cause it is exactly what you just said that's sweet i haven't seen it i'm excited when the user takes a photo the app yeah anyways you guys can read it but yeah exactly what you just said people see all this stuff they can do now with ml and ai but like you said five years ago it was literally completely impossible look oh yeah um let
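[editor's note: a minimal sketch of the dubbing workflow Bryce describes — translate timestamped caption segments, then fit each synthesized clip back into its original time slot. the `translate` and `estimate_duration` callables stand in for hypothetical translation and TTS services; none of these names come from the episode]

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Segment:
    start: float   # seconds into the video
    end: float
    text: str      # source-language caption text

def plan_dub(segments: List[Segment],
             translate: Callable[[str], str],
             estimate_duration: Callable[[str], float]) -> List[dict]:
    """Translate each caption segment and compute a playback-rate
    adjustment so the cloned-voice clip fits its original time slot."""
    plan = []
    for seg in segments:
        target_text = translate(seg.text)
        slot = seg.end - seg.start
        est = estimate_duration(target_text)
        # rate > 1.0 means the synthesized clip must play faster to fit
        rate = est / slot if slot > 0 else 1.0
        plan.append({"start": seg.start, "text": target_text,
                     "rate": round(rate, 2)})
    return plan
```

in a real pipeline the plan would feed a voice-cloning TTS model and an audio muxer; the sketch only covers the timing logic, which is the part that doesn't depend on any particular vendor.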
me you know let me rig up an app real quick that is capable of identifying any bird in the world by its photo you know like it's insane the force multiplier that ai has become so quickly um cause for so long it just felt like you know it's like oh hey here's a couple cool things it can do um it's gonna take me three years to put it into production haha um and now it's uh it's just it's so fast to get out there um you've built literally from ideation a company and product that's just it's amazing uh bryce we just speak to so many founders and i think a lot either have been are currently or will be in a similar spot as you were when google released kekchi and you had to pivot what advice would you give to those who either are currently going through that or especially as these giant companies get more and more involved it'll happen more frequently will have their legs swept out from under them like that i think if you can build something and you've learned like that process like you can build something else very quickly just like applying your learnings to other aspects finding areas that are gonna be untouched and in the future building things that won't be overtaken by you know gpt-10 or whatever um think about like how long it takes to train something how you can get data that's up to date and what area the up to date data is like crucial for and close the loop on your models like your data collection um i don't know i think there are a couple different approaches that you can take but i would just say like keep learning like that's the biggest thing right now you know as we keep learning it's like that's the exciting part so i mean if your idea didn't work out go come up with 10 more you know ask gpt for a couple more something that i wanna point out with ai is that i feel like with each step of learning really what you're doing is
taking two or three steps because you take one step and then you know you're able to use chat gpt to generate more complicated code or you know perform more complicated automations etc um and it's just it's gonna be insane when you know five years from now i can spin up a mini spencer and i'm like i don't want to work today like i'm gonna make mini spencer do it oh my god exactly bryce you said something super interesting you said try to identify something that you know won't be taken over by chat gpt or whatever model or large company um i'm curious if you have any thoughts on how to identify a need or gap needing filling like how do you do that how do you identify something that won't be taken over in the short term so i think there are a couple different things you have to look for um one like i mentioned data that updates consistently i think model training takes months so what's something that changed over the last couple months where your answer would change between you know that amount of time um so i think that's one i think another one is noise so because of the way that language models work your scholarly articles are getting as much weight oftentimes as some dude on reddit and so how can we make sure that the data that it's referencing and outputting is like the quality data that we're looking for and how can i make that more definitive and like take out the guesswork of it so i think there's that part of things especially as models are getting trained on more and more synthetic data i think that will become more and more crucial like the predictions will become better and like the next word predictions will become like super good but there's gonna be so much noise in the data that it's gonna struggle at certain tasks and so if you can really dial in on a specific niche that you can provide the data for then you're able to improve things quite a bit i think and have an edge so i'd say those two
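[editor's note: the "scholarly article vs some dude on reddit" point amounts to weighting retrieved passages by source quality before handing them to a model. a tiny sketch, with made-up source types and weights purely for illustration — nothing here is from the episode]

```python
# weight retrieved passages by source quality so a peer-reviewed
# article outranks an equally "similar" forum post
SOURCE_WEIGHTS = {"journal": 1.0, "preprint": 0.8, "docs": 0.7, "forum": 0.3}

def rerank(hits, default_weight=0.5):
    """hits: list of (similarity_score, source_type, text) tuples.
    Returns hits sorted by similarity * source weight, best first."""
    def weighted(hit):
        score, source, _ = hit
        return score * SOURCE_WEIGHTS.get(source, default_weight)
    return sorted(hits, key=weighted, reverse=True)
```

the design choice is that quality is applied multiplicatively at rerank time rather than by filtering forums out entirely, so a forum post can still win when nothing better matches.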
things it sounds very familiar to a lot of what we've heard from people which is like you know over the last 15 years there's been a lot of talk about like a data economy an information economy and i think that we're actually finally hitting that point with ai because previously it's like yeah sure we had petabytes on petabytes on petabytes of data but like only you know only terabytes of it was really useful um and now we're finally hitting the point where we can make the rest of it useful as well um it's yeah it's gonna be amazing haha yeah it's super exciting bryce what do you think uh i mean thinking about the last six months where you've been you're here now where do you see yourself in six months with these different ventures things going on um so i think i'll still be working on both of them um but i've got another one that i'm like super excited about and so i'm probably gonna be doing that on the weekends now um yeah it's a curse i need to like focus on one thing so i'll be building up a sales team for the one i just think i need to balance like the technical and like the business you know it's like meditative for me we'll bring you on in 6 months so you can talk about your third initiative whatever that is haha yeah i'm super excited about that actually um there are like terabytes of like i was talking about those articles like phd dissertations articles at the arxiv from cornell they've got terabytes of that information and so my idea and i've already built out like how it's gonna work i just need to like get the money to do it but um it's just applying the same approach of a network of agents to information retrieval from those and be able to do like long responses so instead of just like you know a 20 second response time you can have it searching for months and finding like new approaches to problems or new like if you wanna do low resource language translation it will go
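[editor's note: the long-running agent search Bryce outlines — start from one topic, then wander into adjacent fields and carry ideas back — can be sketched as iterative keyword expansion over a corpus. the corpus shape and function are hypothetical stand-ins, not his actual design]

```python
def expand_search(corpus, seed_terms, rounds=3):
    """corpus: {article_id: set_of_keywords}. Starting from seed terms,
    repeatedly pull in articles that share a keyword, then adopt their
    keywords -- mimicking an agent drifting into adjacent research areas."""
    terms = set(seed_terms)
    found = set()
    for _ in range(rounds):
        new = {aid for aid, kws in corpus.items()
               if kws & terms and aid not in found}
        if not new:
            break          # frontier exhausted, stop early
        found |= new
        for aid in new:
            terms |= corpus[aid]   # adopt the new articles' keywords
    return found
```

each extra round widens the search by one "hop" of shared vocabulary, which is the toy analogue of letting the system search for months instead of seconds.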
find all of like the articles about low resource translation but also look at ai articles also look at you know any other applications in other spaces and try to move those into your space um so anyway that's what i'm hoping for in the future but alright stay tuned world alright um jake are there any other questions you were wanting to ask i think that i've covered all mine yeah i think i'm good yeah thanks for coming back on this was a lot of fun to have you back and just for listeners if they wanna you know follow this journey of yours you're turning out cool ideas and companies uh what's the best way for them to follow you yeah so my linkedin is at bryce judy b r y c e j u d y and then you can see our website tarifflo com and you can follow tarifflo on linkedin as well it's t a r i f f l o um so that'll probably be the best way to stay in touch perfect thank you so much for coming back on it was great to have you yeah thank you guys we'll stay tuned for the next uh iteration of everything yeah if i make