
GRTiQ Podcast: 144 Anirudh Patel

Today I am speaking with Anirudh “Ani” Patel, Senior Research Scientist at Semiotic Labs. Long-time listeners might recognize Ani from a previous appearance on our podcast, Ep. 106, where he, alongside his colleagues Sam and Tomasz, explored the intriguing crossroads of AI, web3, and The Graph.

I am delighted to welcome Ani back for a comprehensive interview, where we explore his professional and educational journey, including learning about Ani’s passion for travel. We then talk about web3 and crypto, tracing the path that led Ani to Semiotic Labs – as you’ll hear, Ani’s entry into The Graph ecosystem shares a familiar thread, as he, like some other guests, made his introduction through Sandia Labs. Our conversation also weaves through many great insights into artificial intelligence and machine learning. Towards the interview’s conclusion, Ani unveils an exciting development: Semiotic Labs is launching an innovative LLM product called AgentC.

The GRTiQ Podcast owns the copyright in and to all content, including transcripts and images, of the GRTiQ Podcast, with all rights reserved, as well as our right of publicity. You are free to share and/or reference the information contained herein, including show transcripts (500-word maximum), in any media articles, personal websites, other non-commercial articles or blog posts, or on a non-commercial personal social media account, so long as you include proper attribution (i.e., “The GRTiQ Podcast”) and link back to the appropriate URL (i.e., GRTiQ.com/podcast[episode]). We do not authorize anyone to copy any portion of the podcast content or to use the GRTiQ or GRTiQ Podcast name, image, or likeness for any commercial purpose or use, including without limitation inclusion in any books, e-books or audiobooks, book summaries or synopses, or on any commercial websites or social media sites that offer or promote your products or services, or anyone else’s products or services. The content of GRTiQ Podcasts is for informational purposes only and does not constitute tax, legal, or investment advice.

SHOW NOTES:

SHOW TRANSCRIPTS

We use software and some light editing to transcribe podcast episodes. Any errors, typos, or other mistakes in the show transcripts are the responsibility of the GRTiQ Podcast and not our guest(s). We review and update show notes regularly, and we appreciate suggested edits – email: iQ at GRTiQ dot COM.

The following podcast is for informational purposes only. The contents of this podcast do not constitute tax, legal or investment advice. Take responsibility for your own decisions, consult with the proper professionals and do your own research.

Anirudh Patel (00:00:18):

And one of the things that Sam sold me on was just that, hey, in web3, on the blockchain, we don’t have a sim-to-real gap. Any sort of RL that we do, any sort of reinforcement learning that we do, you can just apply it immediately.

Nick (00:00:58):

Welcome to the GRTiQ Podcast. Today I am speaking with Anirudh Patel, Senior Research Scientist at Semiotic Labs. Longtime listeners might recognize Ani from a previous appearance on the podcast, Episode 106, where he, alongside his colleagues Sam and Tomasz, explored the intriguing crossroads of AI, crypto and The Graph. I’m delighted to welcome Ani back for a comprehensive interview where we explore his professional and educational journey, including learning about Ani’s passion for travel. We then talk about web3 and crypto, tracing the path that led Ani to Semiotic Labs. And as you’ll hear, Ani’s entry into The Graph ecosystem shares a familiar thread, as he, like so many other guests, found his way to The Graph ecosystem through Sandia Labs. Our conversation also weaves through many great insights into artificial intelligence and machine learning. And towards the end of the interview, Ani unveils an exciting development: Semiotic Labs is launching an innovative LLM product called AgentC. As always, we started the discussion talking about Ani’s educational background.

Anirudh Patel (00:02:06):

Yeah. So I’ve always been interested in math, especially the applied aspects of math. And naturally, this is going to sort of point you towards STEM fields. So for me, it was always engineering or science. And so, in college I took a bit of time to decide. The advantage of college is the courses that you have, a million different courses that you can take, and you’re always just struggling in your first year to figure out, “Okay, what should I take, what should I do?” And, the course that actually really captivated my attention was a course on signal processing. And I got my bachelor’s and my master’s from Stanford in electrical engineering, in signal processing specifically. And eventually, I ended up pivoting from signal processing into AI. But I really think a lot of the mathematics, a lot of the problem-solving techniques that I learned from signal processing really helped me out within my AI career. There’s a lot of overlap between those two domains.

Nick (00:03:01):

Ani, we’re going to talk a lot more about your interest in AI and, of course you’re with Semiotic, which a lot of listeners will already recognize as the leader in AI within The Graph ecosystem, but also contributing a lot throughout the web3 space. I do want to ask this follow-up question related to applied mathematics. And so, that might be a term that listeners aren’t a hundred percent familiar with or what that means. How is applied mathematics different than just mathematics generally?

Anirudh Patel (00:03:26):

So, applied mathematics… I don’t know how to say this without it being a tautology. So let me say what abstract, more research-oriented mathematics is. So in research mathematics, you’ll have things like category theory and set theory, which are very high-level abstractions. You’ll have number theory problems where you’re like, “Okay, what types of numbers can we form?” So, we have, say, positive and negative numbers. We have real and complex numbers. There are also normal numbers, and there are other types of numbers that people come up with. And so, there are interesting problems in all of these domains as well.

(00:03:59):

So, applied mathematics is really focused on: what can we take from all these abstract concepts that mathematicians have developed and use in the real world today? Whereas abstract mathematics, you can think of it like theoretical physics, right? It’s all chalkboard stuff. And if it doesn’t end up having any application in the real world, then so be it. I mean, that said, of course, if some of your listeners are developers, you might’ve heard of functional programming. Functional programming is entirely based on the abstract math discipline of category theory. So a lot of times, these two things do find their way into everyday practical use.

Nick (00:04:34):

You mentioned growing up that you were inclined towards STEM and it led you on a path educationally and ultimately in your career. What other things were you interested in as a young person? What hobbies interested you?

Anirudh Patel (00:04:47):

My main passion I think was traveling. I think that comes with the territory with my family. We always had this sense of [inaudible 00:04:55], so I used to travel a lot as a kid. And within myself, this seems to manifest in a different way from the rest of my family. I think most of my family, they just like to go on trips, explore the history, explore the architecture, stuff like that. But, I was always very interested in immersing myself into local cultures. So even when I was a kid, I used to try to learn the local language before I would travel to a place, try to read up on the history or listen to local music and stuff. And just so that when I got there, I was able to communicate with locals, sort of understand their perspective a bit better, broaden my worldview. So yeah, I mean the main hobby for me was always, it was traveling, but in between trips, I was constantly doing something to prepare for my next trip. So, that was how I spent my time.

Nick (00:05:44):

What’s the coolest place you’ve traveled to? What’s the one thing everybody should see?

Anirudh Patel (00:05:48):

I think it depends on your personality; different people care about different things when they travel. But for me, I’m especially interested in history, and I don’t think you can really match Egypt, just in terms of the length of time for which Egypt has existed. So of course, before I went to Egypt, I read through travel journals from travelers across millennia that had visited Egypt and tried to get a feel for how they looked at things. And one of the coolest things for me was being on the Nile, and as we’re floating along the Nile, looking off to my left and right, and seeing these sites that people have talked about in journals that are 2,000 years old, and just having that experience of, this civilization’s been around for so, so long.

Nick (00:06:35):

Well, another question I like to ask people that are well traveled like yourself is about food and drinks. So, what’s the best food or drink you’ve tried in some of your travels?

Anirudh Patel (00:06:44):

So being Indian, I’m sort of partial to spicy food, and there are probably a lot of Indian dishes I could say. But oftentimes, I think when we’re traveling we get to this point where we’re just like, “We just want a meal with spices, please.” And so, in the Middle East, one of my favorite things to eat is falafel, actually. And especially, some of my earliest memories of the Middle East are just being out on really cold Jordanian nights and finding some street vendor that’s selling hot falafel. There’s nothing like it, but I think the best variant of falafel actually belongs to Egypt. It’s something called tameya. In traditional falafel, they make it with chickpeas. In tameya, they make it with fava beans, which just gives it a creamier texture. And then typically, or the version of tameya that I prefer, let’s say, is one that’s coated in coriander seeds and crushed walnuts before they deep-fry it.

Nick (00:07:41):

Yeah.

Anirudh Patel (00:07:41):

And I mean, it’s just the textures, the flavors, everything. It’s amazing.

Nick (00:07:47):

I’d be curious to know then where you haven’t traveled, that’s still on your list. Where’s the place you’re hoping most to go next?

Anirudh Patel (00:07:55):

Oddly enough for having lived in the Americas for most of my life, I’ve not been to South America really. And so, I definitely want to experience more of South America. I mean, I speak Spanish, so there’s no sort of problem jumping over there. I think Machu Picchu is extremely high on my list still. I would love to do the hike up to Machu Picchu, although I would want to make sure there are hotels along the way. I’m not too big into camping, so we’ll see about that.

Nick (00:08:24):

Let’s go back to your education a bit here. So if we go back in time, do you remember what it was that interested you initially in electrical engineering?

Anirudh Patel (00:08:32):

If you remember, the part of electrical engineering that I really enjoy is actually signal processing. And when I took this introductory class, I had this preconceived notion about what a signal might be. So you can think about music or audio signals, you can think about images as signals, you can think about electrical signals. These are all common things, but what I discovered is that signal processing and the techniques that apply to signal processing are so much broader than that. So, you can think about DNA as a type of signal, you can think about traffic patterns as a type of signal. And all of the techniques that you learn that apply to, say, music also apply to traffic, and to medical images, and to a lot of other spaces. And so, in a sense, this is also why I’ve always liked math. I’d say, I could take this very abstract concept, I can learn one thing, and I can apply it in a hundred different areas.

(00:09:25):

And, signal processing to me was very much the same. I could learn this one higher-level concept, and of course, it has its most natural applications, but then you can apply it in a million places. I mean, one of the managers in my last job who led a signal processing department, he actually had his PhD in anthropology. So, that just sort of goes to tell you the variety of things you can apply this to.

Nick (00:09:46):

I’ve never heard anybody frame the term or concept signal the way you just did, but super interesting and I’m glad you took a moment there to share that. And, I see that it’s true. I mean, it’s a fascinating way to think about signal and to equate it to all the different experiences we have. Let’s connect another dot here. Again, going back to your education, this moment in time when you get interested in AI, when did that happen and what were the circumstances, or what was going on at that time for you?

Anirudh Patel (00:10:15):

So to set the scene a little bit, around 2017, within signal processing, I was focused on medical imaging applications, specifically around something called 3D ultrasound computed tomography. But in general, I was keeping up with the field of medical imaging more than anything. And at that time, there started to be these papers that came out within AI that had these remarkable results. So, there are classical signal processing techniques for detecting interesting features in medical images.

(00:10:47):

So you have an X-ray and let’s use a pertinent example, you want to detect whether there’s pneumonia in this chest X-ray. There are classical signal processing ways to do this. And there was this paper, this AI paper that came out, called CheXNet. And, the researchers in that paper were actually able to get superhuman performance. They were able to beat radiologists at their own game, at least within this very specific domain. And so, this was just an extremely exciting frontier, I think for me to delve into at that time, because I could see how it was doing better than what I’d already learned.

(00:11:21):

But at the same time, so for example, the technique used for this pneumonia detection in the AI paper was called a convolutional network. Convolution is actually a signal processing technique. So, the stuff that I’d learned already was being applied in this very new and interesting way, and beating what I’d learned. So, it felt very natural for me to shift over and try to get a better understanding of that domain.
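To make that connection concrete: the convolution at the heart of a convolutional network is the same operation taught in an introductory signal processing course. A minimal NumPy sketch (the signal and filter values here are made-up illustrations, not anything from the CheXNet paper):

```python
import numpy as np

# A 1D signal (e.g., audio samples) and a small hand-designed filter.
signal = np.array([1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0])
kernel = np.array([0.25, 0.5, 0.25])  # a simple smoothing filter

# Classical signal processing: convolve the signal with the filter.
smoothed = np.convolve(signal, kernel, mode="same")
print(smoothed)

# A convolutional network layer applies this same operation (in 2D, over
# image pixels), but learns the kernel values from data instead of
# designing them by hand.
```

The only difference in the deep learning setting is where the kernel comes from: an engineer derives it in the classical case, gradient descent learns it in the neural case.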

Nick (00:11:43):

So you then start formally pursuing education and career in AI, and it all started with this medical signaling that you discussed there. Am I right in understanding this was a little bit of a pivot for you maybe, in your educational journey, pivoting away from electrical engineering and more of a focus on AI? And if I’m right, I mean, was that a hard decision to make, or was that something that you welcomed or were excited about?

Anirudh Patel (00:12:07):

It was definitely a little bit of a pivot. It wasn’t a super hard decision to make. So the professor that I worked with during college, he wasn’t my advisor, but I spent more time with this professor than with my advisor, honestly. He had a 60-year-long career in doing voice recognition, speech recognition using signal processing techniques. And, he’s like one of the most cited authors in this space. He’s incredibly well-respected in that domain. And, he was in his field looking at all these AI techniques coming out and seeing how well they were doing. And, he was super excited. So in fact, he was the one who got me started in thinking about AI in the first place. And I think with his support, it was pretty easy for me to make that switch. My degree program didn’t care that much that I was doing AI stuff instead of signal processing stuff, because like I said, all of the underlying math is the same. And anytime there was a little bit of a conflict, my professor was able to work with me and help me get that figured out.

Nick (00:13:10):

I want to go back to this medical imaging and AI. And so, I’ve read news and listened to podcasts about the utility of AI and where it might creep into our everyday living. Of course, you and some of your colleagues join me for a very special episode of GRTiQ Podcast, where we explored AI and crypto, but the application to medical imaging is incredibly interesting to me. One, because I think humans are just unfortunately very flawed and I think they miss things. And so, the idea that AI could be trained to participate in this arena is very interesting to me. Is AI better? I mean, is that a fair assumption? And, what does the future look like here on something like this?

Anirudh Patel (00:13:54):

Yeah. Is AI better? I guess the answer is, it depends. Let me back up a step here. There is a vast, vast intersection between medical imaging and AI and, of course, an even bigger intersection between medicine in general and AI. So, we can talk about topics like denoising or scan quality enhancement. I think a good place to start is with that paper that I was just referencing, CheXNet. It’s sort of a cornerstone within the field. It has like 2,500 citations or so. And yeah, like I mentioned before, one of the great aspects about this paper is actually that it not only outputs whether or not it thinks that the patient has pneumonia from looking at the chest X-ray, but it’ll also highlight the features, the relevant regions of interest. And so, maybe it has superhuman performance, maybe it doesn’t; at least on the dataset that they tested it on, it does.

(00:14:53):

Of course, the real world is crazier than that. People mess up the settings on their X-rays, and you’ll get slightly different images, and that might throw off the AI. So I don’t think we’re at a stage in which we should say, “Let’s just go with the AI and forget about the radiologist.” I think techniques like CheXNet, which will highlight the regions of interest and help radiologists focus their attention on the interesting parts of the image, are probably more useful. In the long term, who knows? But I think for now, I definitely… If I had to get a chest X-ray for some reason, and I ran it through CheXNet and I went to a radiologist, I’m going to trust the radiologist.

Nick (00:15:29):

Fascinating answer, and I’m sure the story is far from finished here. As AI gets better training and we get, I guess, better signal from the humans and the machinery, perhaps that story shifts over time. So interesting, Ani. You’re studying a lot of really complicated things for people that are non-technical: applied mathematics, electrical engineering, AI, medical imaging. A lot going on here. What did you do upon graduation?

Anirudh Patel (00:15:56):

After I graduated, I joined Sandia National Labs, specifically in a group that focused on low-power signal processing applications. They would build devices that they would want to put out into the field for 25 to 50 years. The device should just survive on its battery alone. But like I’ve been saying, there were a lot of interesting AI techniques that were coming in, and augmenting signal processing techniques or replacing signal processing techniques. So, I was actually really brought onto the team to help out with that transition. So I spoke the language of the other signal processing engineers on the team, and then I also spoke the language of AI, so I was able to help us start to move in that direction.

Nick (00:16:38):

Incredible. Another Sandia alum here on the podcast and listeners that follow the show know that we recently featured Mickey with Edge & Node on the podcast and she talked about her time at Sandia. And there’s been other prior guests that also came from Sandia, so I’m starting to think that Sandia is another one of these common threads in the storyline of contributors within The Graph ecosystem. Talk to us about then moving into Sandia. What was it like going to work for the government? Is that job or an environment that you enjoyed?

Anirudh Patel (00:17:07):

I don’t really think it would surprise anyone to hear that there was, let’s say, some level of bureaucracy involved at Sandia. But for me, the experience was really amazing. I think part of that was that my manager saw his role as really shielding us from as much of the bureaucracy as possible. And so, I had friends that worked at Sandia who were dealing with a lot more, maybe two or three times the paperwork that I had to deal with. And so, I think my manager made that a lot more enjoyable for me. And then I think, because of that, in the teams that I’ve led at Sandia, I tried to emulate that behavior.

Nick (00:17:46):

Right.

Anirudh Patel (00:17:46):

Whenever, for example, I would lead research projects, I would try to take care of most of the interactions that we’d have to do with the government, actual government employees; Sandia is technically a contractor. So just removing that stuff. But in every other way, really, Sandia is just an amazing place to work. It’s almost like a university environment. There’s an expert in every single different place that you can think about. So I mentioned research projects that I led. One of those research projects, my team had on it an evolutionary biologist, someone with their PhD in hyperspectral imaging, and then a game [inaudible 00:18:23]. So these are three very, very different fields, but Sandia has expertise in all of them. And, they have people that we can rely on for all of them. And so, in that sense, Sandia was just an amazing place to work. You really got to interact with everyone. And if you ever found a problem, or you’re like, “I don’t know what the next step is,” there’s always an expert who is able to guide you.

Nick (00:19:15):

And am I correct that it was during your time at Sandia that you got started, at least, working on or exploring reinforcement learning? And if I’m right, talk to us a little bit about what reinforcement learning is and what you were doing at Sandia on the topic.

Anirudh Patel (00:19:42):

Yeah, so I did get started with reinforcement learning at Sandia. I had a bit of experience from college, but honestly, you learn stuff in college and then if you don’t use it, you forget it. Yeah, so it was a bit of a rediscovery for me. So traditionally, when we think about making an action, we think about it in three separate phases. First of all, you try to sense your environment. Then, you think about the information that you’ve collected and try to do some analysis of it.

(00:20:07):

And then, the third step is to choose some action to maximize your utility, maximize your reward, whatever you want to call it. People in DeFi or MEV might think this sounds familiar, because we use similar language in the web3 space to talk about intents. So, reinforcement learning is actually teaching a computer how to think and act. So maybe let me illustrate this by describing a problem that I worked on with some collaborators at Georgia Tech. So let’s say we have some forest fires that we’d like to put out, and to do this, we are going to have a team of drones. So, there are two types of drones. There are perception drones, and then there are action drones. The perception drones have excellent sensors. They’re able to look at the fire; they have prediction algorithms that are able to predict maybe how the fire is going to spread.

(00:20:50):

And then the action drones have, let’s say, worse sensors, but they have fire retardants. They can fly over and they can actually put out parts of the fire that might be getting out of hand, let’s say. And so the question is, “Okay, we have this team of autonomous drones, there are two different types of autonomous drones, how do we want to coordinate these two types of drones?” And so, this leads to a whole bunch of interesting questions. So we have, for example, the perception drones: when they communicate with each other, what information should they be sending, so that they can get a clear picture about the overall state? Then, is that different from the information that the perception drones should be sending to the drones with the fire retardants? My personal interest is in, so this is a subfield called multi-agent reinforcement learning. Multi-agent, multiple drones. There are multiple individual actors making actions.

(00:21:35):

And, a lot of the problems here are coordination problems. And in our case, we were dealing with a fully decentralized problem. In a lot of traditional problems, before reinforcement learning especially, you would call back to a central planner, and that central planner would try to figure out, “Okay, here’s what we are all going to do.” In our case, this was a fully decentralized system. So each drone got its own limited vision of the environment, and from the messages that it received, it would have to act in a way such that it didn’t impede any other drone, and they were collectively able to make, let’s say, a good plan of action. So, this is how I think about reinforcement learning. So if you want an AI that can detect pneumonia from chest X-rays, that’s computer vision; that’s a different branch of AI. If you want an AI that can take that result and decide what treatment plan to enact, that’s reinforcement learning, and this is why reinforcement learning is so valuable.
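The sense-think-act loop Ani describes can be sketched in a few lines of code. This is a hypothetical toy example (a single agent, a made-up 5-cell corridor, tabular Q-learning), not code from the drone project or any other work mentioned in the episode:

```python
import random

# Toy "sense, think, act" loop: an agent in a 5-cell corridor learns, by
# trial and error, that moving right reaches the goal.
N_STATES, GOAL = 5, 4
ACTIONS = [0, 1]  # 0 = move left, 1 = move right

def step(state, action):
    """Act: apply an action; observe (next_state, reward, done)."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

# The "think" part: a table of learned action values (Q-learning).
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2
random.seed(0)

def choose(state):
    # Sense the current state, then pick an action: mostly the
    # best-known one, occasionally a random one to keep exploring.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    best = max(Q[state])
    return random.choice([a for a in ACTIONS if Q[state][a] == best])

for _ in range(200):          # training episodes
    state, done = 0, False
    for _ in range(100):      # cap episode length
        action = choose(state)
        nxt, reward, done = step(state, action)
        # Update the value estimate from the observed reward.
        Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
        state = nxt
        if done:
            break

# After training, the greedy policy walks straight toward the goal.
policy = [max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES)]
print(policy)
```

The multi-agent setting Ani works in layers coordination on top of this same loop: several such agents, each with partial observations, learning policies that must not work against each other.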

Nick (00:22:29):

I love that example. And I do want to ask this follow-up about how complicated working on these problems is. So for example, as a non-technical person, I don’t have a strong background in math. Some of this is conceptual. I mean, you laid out the three phases of making decisions, and that’s conceptual. Then you talked about trying to figure out the way these drones would interact and address these fires; that’s mostly conceptual. So the point I’m trying to make here is, are non-mathematical, non-technical people able to participate in this machine learning type of environment, where you’re exploring these things? Or is it really more complicated than I’m perceiving?

Anirudh Patel (00:23:09):

Yeah. So there are, let’s say, two types of AI. One type of AI is computer vision. Computer vision is a pretty mature technology at this point. People have been researching, let’s say, [inaudible 00:23:21] computer vision for a bit over a decade. And so, people who are entering the computer vision space can oftentimes just take a pre-trained convolutional network off the shelf and apply that to their application. Or you can think about the LLMs that have come up recently; it’s a similar thing. It’s a pretty mature technology. You don’t really have to have the mathematical background. You don’t have to understand how these things necessarily learn or store information on a fundamental level in order to be able to apply them. Reinforcement learning isn’t like that yet. Modern reinforcement learning is still a lot younger. It dates to around 2017, which is not very long ago, even though in AI years a decade is somehow a century.

(00:24:02):

And so, at the place where we’re currently at in reinforcement learning, you do need that mathematical background in order to be able to contribute. Especially in multi-agent reinforcement learning, a lot of times we’re dropping into game theory. A lot of times we’re doing advanced calculus or advanced linear algebra techniques, so it does get a bit more confusing.

Nick (00:24:21):

Makes sense. I appreciate you explaining that. So, let’s explore another takeoff point here. So we’ve got a pretty good sense here of your career track, your education, your background. At some point, you become aware of crypto. So, let’s go back in time there. When did you first become aware of crypto and help us understand what some of the first impressions were?

Anirudh Patel (00:24:40):

So unfortunately, the first impressions weren’t great. So I first heard about crypto in college, probably, I don’t remember the year at this point, but when Bitcoin was randomly shooting up. And at the time, I’m a student, I’m too busy, I don’t really have the time to look into it. And to me, it was just like, okay, so this is just basically another currency on Forex. It’s just another vehicle for wealth creation, fine, whatever. And that was just sort of where I left it, until I revisited it a lot later in my life.

Nick (00:25:11):

Ani, as I’m sure you can imagine, and again, longtime listeners know, for most guests of the podcast, their first exposure to crypto was through Bitcoin. And, there is the perception that this is something of a speculative asset. And then over time, of course, they become aware of something about the underlying technology, and I’m sure we’ll explore that a lot more. But before we get there, I do want to ask: how did you get connected with The Graph, Semiotic Labs, and this early discussion about potentially leaving Sandia and exploring a career in web3?

Anirudh Patel (00:25:45):

So, you earlier mentioned other Sandians that you’ve interviewed. Of course, Sam Green, co-founder of Semiotic and our head of research, also used to work at Sandia. In fact, he has a PhD in reinforcement learning, in a branch called efficient reinforcement learning. And somehow, we’d never met while at Sandia, even though we worked on the exact same project at the same time. But I think we were just working on different aspects of the project, so we didn’t really end up collaborating. But anyway, at some point, Sam left Sandia to start Semiotic, and he contacted our PI, our principal investigator on that research project that we were both on, and asked our PI, who was also a reinforcement learning researcher, like, “Hey, do you have any interest in joining Semiotic?” So actually, our PI forwarded my information to Sam instead and said, “You should probably speak with Ani.” And yeah, that was that. Sam and I had a few chats.

(00:26:37):

I spoke with some other people at Semiotic. And shortly after, I left Sandia to join Semiotic. At the time, Semiotic was already a core dev in The Graph, so this was my introduction to The Graph as well. I hadn’t really thought about crypto while at Sandia, really, after that first exposure. And so, to understand why I was interested in joining, you have to understand a little bit more about reinforcement learning. In reinforcement learning, we have a problem called the sim-to-real gap, the simulation-to-reality gap, which is that we train these algorithms in simulation. Like, let’s say, we train in a simulation of a drone flying, and then you try to put that in a real drone. And let’s say there’s some wind out there, or the mass of the drone is off by 2% or something. And this totally throws these things off.

(00:27:23):

This is why I say it’s a less mature technology, as well, than computer vision or LLMs or something. These algorithms are just not ready in any real way to be deployed in the real world. And one of the things that Sam sold me on was just that, “Hey, in web3, on the blockchain, we don’t have a sim-to-real gap. Any RL that we do, any reinforcement learning that we do, you can just apply it immediately.” And the sim-to-real gap, for me, it was never my interest within reinforcement learning. So that aspect of just not having that headache at the back of my mind in every research project was what ultimately sold me.

Nick (00:27:59):

Well, Ani, this sim-to-real gap, I think it’s the first time I’m hearing of this, but it is a light bulb moment for me as to why somebody like you, who has the background that you have and was working at Sandia on interesting problems, would find the opportunity to make a move and eliminate that gap super interesting. So when you made that move, did it feel like career risk? Did you think in your own mind, “I’m kind of making a gamble here, and I’m moving into this emergent space with this startup working on this thing called The Graph?” How were you thinking about all that?

Anirudh Patel (00:28:36):

Yeah, it was definitely a risk. Especially, again, I hadn’t really thought about crypto for a long time, so it was just a totally new field, totally different from the applications that I studied, let’s say, within college or even what I learned during my time at Sandia. Of course, the underlying math is always the same. That’s one of the great things about math, like I was saying before, but it’s still a different domain.

(00:28:57):

I also had other concerns. So at Sandia, I was, at the time, leading teams of experts, teams of experts with a variety of backgrounds. So I was in this constant mode of learning from people who knew a lot more than me, which is always an exciting place to be. And at Semiotic, we don’t have that many employees. So a lot of the time, when I was working on projects, it would be just me, or me and one other person or something. So, I didn’t have the same sort of collaborative research environment. Along the research thread as well, at Sandia I was a researcher working on more fundamental questions. So it was a lot of math, not necessarily very applied, especially towards the end of my time. Whereas at Semiotic, I would have to move towards something that was more product focused. We want to bring value back to users of The Graph or to people within the web3 space.

(00:29:40):

And so, all these questions were sort of spinning around my head. Of course, another thing was the negative ecological impact of crypto, which at the time was on and off in the news. Occasionally, you’d hear, “Okay, crypto has a bigger carbon footprint than” such and such country. And so, I ended up having a lot of long conversations with friends, family, advisors, etc., who mitigated many of my concerns. I read about proof of stake and about how it would reduce Ethereum’s ecological impact. And in the end, I think I got to the point where I felt comfortable, because all the people close to me were involved in the process as well. Everyone else was also sort of on the same page as me. And ultimately, if I’m going to take risks, if I want to explore something new, it might as well be while I’m young.

Nick (00:30:29):

As you mentioned there, you left Sandia and went to work at Semiotic Labs. Semiotic Labs is one of the core devs helping to build The Graph. I’ve had teammates of yours on the podcast before from Semiotic, and of course, I’ve featured a lot of members from the other core dev teams on the GRTiQ Podcast. For listeners for whom this might be their first episode or their first exposure to Semiotic, do you mind just providing a quick introduction to who Semiotic is, a little bit about the team and what they’re working on at The Graph?

Anirudh Patel (00:30:59):

Absolutely. Okay, so Semiotic’s expertise is in AI, data, and then verifiability, I would say. So we have a good team of software engineers, as well as people with more of an AI background, people with cryptography backgrounds, who contribute in various places, both within The Graph and within the web3 community more broadly.

(00:31:20):

Within The Graph, Semiotic is responsible for things like verifiable payments, verifiable indexing, verifiable querying. A lot of these are still in the works. We’ve also improved the Indexer stack with things like the allocation optimizer and [inaudible 00:31:35]. And beyond that, we’re also working on rewriting some of the Indexer stack from TypeScript to Rust, which will provide some performance benefits. It’ll make the code less buggy, let’s say, in some ways, while also helping us with things like verifiable payments at the same time. More recently, we’ve been driving conversations within The Graph to enable SQL and LLMs as data services within The Graph ecosystem. In the future, people should be able to use SQL to query The Graph. And people should also be able to, let’s say… Indexers or some similar role could run LLMs, and people could ask an Indexer for an LLM response, as opposed to OpenAI’s ChatGPT or something.

(00:32:15):

And so, these would all help data scientists and, let’s say, non-technical users to better use The Graph as well. Beyond The Graph, Semiotic also incubated Odos, which is a DEX aggregator built on our AI expertise. And more recently, we’ve started working on a new project called AgentC, which we’re also excited to bring to the community. So personally, I’m of course mostly involved with AI and optimization, things like the allocation optimizer and [inaudible 00:32:41]. Because, like I mentioned a few times, reinforcement learning has this game theoretic aspect, I also contribute to the cryptoeconomics side of The Graph. And more recently, I’m focused on our new product, AgentC.

Nick (00:32:54):

Well, Ani, as you mentioned there, a world of data services is the first objective of the recent roadmap called the New Era of The Graph, which was released by The Graph Foundation. I suspect a lot of listeners are familiar with that announcement, and hearing you talk about LLMs and SQL all hints back to that vision for The Graph into the future.

(00:33:13):

And it’s fun to think that a team with as much expertise as you and your colleagues at Semiotic Labs have is working on it. As we go back then and think about this transition you made from Sandia to Semiotic, you said early on, “Hey, this was a new industry to me,” and you had some negative first impressions of crypto. How have those perceptions changed since you’ve joined and gone to work on all these incredible projects?

Anirudh Patel (00:33:40):

Yeah, yeah, for sure. So definitely, I no longer think that the only value in crypto is the fact that you don’t have the sim-to-real gap in this space, so I’ve definitely had a lot of room to grow. As a researcher, I think most researchers are probably fans of open source. It makes our lives a lot easier. For example, if someone publishes a paper and I can’t reproduce the results, it’s a bad sign for me. And in fact, some conferences have now started requiring that people publish code with their papers, because people were gaming the system, let’s say. But that said, I never really connected the dots between decentralization, verifiability and open source. To put it more concretely: we have a PhD economist within The Graph ecosystem who works on the cryptoeconomics side, and I was talking with him a couple weeks ago.

(00:34:30):

And one of the cool things about The Graph’s ecosystem is that the Indexer marketplace is nearly perfectly competitive, which is a very rare thing. In fact, I can’t think of a real-world example off the top of my head, although I’m sure our PhD economist could. And I think just that sort of property can provide incredible benefits to end users, like customers of Indexers, customers of The Graph, in that if we do our jobs right, and [inaudible 00:34:58], we have a long way to go, I’m not saying we’re there yet, but if we do our jobs right, customers should get the best prices, the best quality, the best latency, etc., purely from the competitive nature of the protocol that we can create and engender.

(00:35:14):

And so, the one advantage that a centralized competitor might have over this decentralized marketplace is the fact that, hey, if something’s failing, I can just call them up and say, “Your service isn’t working, here’s what’s wrong, please fix this for me,” and they’ll hopefully put someone on that. But I think this is why at Semiotic, and at The Graph more broadly, we’re so concerned about verifiability, so that we can have Indexers get slashed if they serve you false data or something like that, so they’re disincentivized from doing that. So going back to your original question, what’s the value of web3 [inaudible 00:35:50]? I think we have the opportunity to create enormous benefits for consumers that centralized entities just can’t match. We have a very, very long way to go, of course, but the potential is there, let’s say.

Nick (00:36:03):

Ani, you mentioned this earlier and it’s part of the reason you and I are speaking today. We’re going to talk a little bit about AgentC. And again, this is a new project that the team at Semiotic Labs is working on. And, there’s a lot of alpha and interest in the community around a tool like this. And so, set the table again, what is AgentC?

Anirudh Patel (00:36:22):

AgentC is Semiotic’s new product. So I mentioned before the relationship between reinforcement learning and intents in the DeFi [inaudible 00:36:31] space, just about how similar they are. Actually, let me back up a step and give a quick definition of intent, in case people are not familiar with that term. An intent basically refers to a user being able to specify a desired outcome or goal in natural language, so English or whatever language, rather than specifying the steps that they want to take in order to reach that goal. And you might think about these steps as being various things. It could be some code, it could be API calls. It could be making a swap via Odos, or Uniswap or whatever.

(00:37:03):

And so, in the blockchain or web3 context, intents allow users to express what they want to accomplish on-chain, without having to dive into the exact details of the underlying transaction. Let’s say, in The Graph context, a Delegator could express their desire to delegate. Many of the technical steps could be solved for them using solvers, such as finding the best Indexer for them, and then agents, which handle, for example, setting up the transactions that you need to execute in order to delegate to someone.
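To make the two roles concrete for readers, here is a hypothetical sketch of that flow. Every name, metric, and step below is invented for illustration; this is not how The Graph’s contracts or any real solver actually work.

```python
# Hypothetical sketch of the intent flow: a "solver" resolves the vague part of
# an intent, and an "agent" expands the result into concrete transaction steps.
from dataclasses import dataclass

@dataclass
class Indexer:
    name: str
    quality: float  # stand-in for whatever a real solver would optimize

def solve(indexers: list[Indexer]) -> Indexer:
    """Solver: resolve 'find me an Indexer' to a concrete choice."""
    return max(indexers, key=lambda ix: ix.quality)

def plan(amount_grt: float, indexer: Indexer) -> list[str]:
    """Agent: expand the chosen outcome into ordered transaction steps."""
    return [
        f"approve {amount_grt} GRT for the staking contract",
        f"delegate {amount_grt} GRT to {indexer.name}",
    ]

# Intent: "I have 1000 GRT I'd like to delegate; find me an Indexer."
indexers = [Indexer("indexer-a", 0.72), Indexer("indexer-b", 0.91)]
chosen = solve(indexers)
steps = plan(1000.0, chosen)
print(chosen.name)  # indexer-b
print(steps)
```

The point of the split is that the user only states the goal; choosing the counterparty and sequencing the transactions are delegated to machinery they never see.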

(00:37:34):

So you could think about it as simply as: you might type in a sentence, “I have this much GRT that I’d like to delegate, please find me an Indexer and set up that delegation for me.” So, at Semiotic, we really want to enable this, and I think we’re well-placed to do it. We have, of course, AI expertise. I mentioned Sam and I both have reinforcement learning backgrounds. [inaudible 00:37:58], who’s another AI expert within our space, was also a [inaudible 00:38:02] on GRTiQ with me and Sam last time. His background is in AI language models, so more along the LLM space, let’s say. We have SQL experts with decades of experience. We have strong software engineers, cloud experts and web3 [inaudible 00:38:19], who understand where the community is going, what web3 fundamentally should mean, what we need to embody within our product to be a part of the web3 space.

(00:38:29):

And of course, not least, we have the support of The Graph and this community. So the end goal, again, is for AgentC to move towards intents, but currently, we’re focused on the simplest problem with value that we can solve. And the simplest problem is just taking your natural language question, converting it to SQL, and then creating some plot or figure from that data, so that you can look at that and get some understanding of what that data means. So, it gives you the information of a dashboard with the flexibility of a chatbot. We first wanted to experiment with DeFi. We support Uniswap V2 and V3, also UniswapX as of this point, as of today actually, and their clones at this point in time. And as time moves on, we’re interested in many more features, such as integrating DEXes so that you can swap, notifications and alerts.
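The “question to SQL to figure” loop can be sketched in a few lines. The table, question, and rule-based translation below are invented stand-ins; a system like AgentC would use an LLM for the translation step, not hand-written rules.

```python
# Minimal sketch of the "natural language question -> SQL -> chart data" loop.
# The table and the rule-based translation are illustrative stand-ins only.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE swaps (day TEXT, volume_usd REAL)")
db.executemany(
    "INSERT INTO swaps VALUES (?, ?)",
    [("2024-01-01", 120.0), ("2024-01-01", 80.0), ("2024-01-02", 50.0)],
)

def question_to_sql(question: str) -> str:
    """Stand-in for the LLM step: map one known question shape to SQL."""
    if "daily volume" in question.lower():
        return ("SELECT day, SUM(volume_usd) AS volume "
                "FROM swaps GROUP BY day ORDER BY day")
    raise ValueError("question not understood")

rows = db.execute(question_to_sql("What is the daily volume?")).fetchall()
print(rows)  # [('2024-01-01', 200.0), ('2024-01-02', 50.0)]
# `rows` is exactly the x/y data you would hand to a plotting library.
```

That last step, turning the query result into a plot, is what gives “the information of a dashboard with the flexibility of a chatbot.”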

(00:39:16):

I mentioned applications beyond DeFi, such as delegation within The Graph. So we definitely don’t want to restrict ourselves to DeFi; we want to provide value to the web3 community more broadly. And so, let me just give you the ultimate vision real quick. The Graph is a platform which web3 users can use to acquire data. We want AgentC to be able to use that data to enact intents. So eventually, in the same way that The Graph empowers dapps to use data to create dashboards on their websites, let’s say, we want to give users of AgentC the ability to incorporate intents into their workflows in a variety of ways. So for simple functionality, you can just use our web chat interface, which you can find at AgentC.XYZ. For more complex things, you might favor our API, let’s say if you’re a developer or you’re an institution of some sort. We’ve actually even discussed no- or low-code solutions that the web3 community could use to quickly prototype new solutions.

(00:40:19):

Ultimately, I want to come back to that reinforcement learning paper about fire-retardant agents and perception agents that I discussed earlier. I keep drawing this parallel between intents and reinforcement learning. And I think the problems that we’re going to face while building intents, and [inaudible 00:40:40] more broadly that AgentC is going to face with intents, are actually very similar to the problems that I’ve been interested in throughout my reinforcement learning research. You have entities with different roles and responsibilities working together to do something that you care about, and you have these problems of coordination. How do I get these various things to work together in a way that’s cohesive and coherent, so that they solve the things that I care about or do the things that I want to do? These aren’t really easy problems. Forget modern reinforcement learning; just take classical reinforcement learning, optimal control.

(00:41:16):

There’s at least a century of research on this topic and it’s still not anywhere close to being solved, let’s say. But these are the problems that I’ve cared about for a long time. These are the problems that I’ve published in. And because of this, I’m very confident in our team’s ability to actually provide these services to the web3 community. We have this foundational understanding of where the challenges are going to be in the space, let’s say. I’m also pretty excited about how, within AgentC, we’ll be able to give back to The Graph. So, I already mentioned how we want to spearhead the SQL and LLM data services. The goal is to build AgentC on The Graph. Right now, we’re using more proprietary stuff, and we don’t want to do that. We like The Graph, or [inaudible 00:42:01] within The Graph, and we definitely want to use that.

(00:42:03):

So once we have that SQL and LLM data service set up within The Graph, we want to transition AgentC to being built on The Graph entirely. And at the same time, as we’re developing these capabilities for The Graph, other developers, other users of The Graph, will also be able to get the benefits. We want to dogfood this a little bit. So as soon as AgentC is able to run an LLM within the network or make a SQL query within the network, you should be able to do that too. So, that’s another aspect of AgentC that we’re really excited about.

Nick (00:42:29):

Ani, that’s an incredible answer and there’s a lot to unpack there. Of course, this probably deserves its own standalone podcast, but just to simplify some of the things you said there and check it with you, and feel free to correct me where I’m wrong. AgentC is this new project being launched by the team at Semiotic Labs. If listeners are interested in seeing it and seeing how it works, they can go to AgentC, that’s A-G-E-N-T, the letter C, dot xyz. And this is basically an LLM built on top of SQL that you can ask questions of, and it’ll project answers, it’ll give you charts and graphs. I mean, it’s kind of like ChatGPT for crypto in some ways. Is that a fair synopsis of everything you just shared with us?

Anirudh Patel (00:43:13):

Definitely, definitely. And yeah, like I said, we are starting there, but we definitely want to move into more complicated, intent-based things as well, where you’ll eventually be able to interact with The Graph using natural language, let’s say.

Nick (00:43:25):

Very exciting. So, I’ll put a bunch of links in the show notes for anybody that wants to click around and see some of these things, please visit the show notes. And I’m sure, like all core devs working in The Graph, you’d love some feedback from listeners who are anxious to give it a shot. If people want to stay up-to-date on updates related to AgentC and other things that the team at Semiotic Labs is working on, what’s the best way to stay in touch with some of these updates and changes?

Anirudh Patel (00:43:50):

Definitely go to the website and sign up for the wait list. At this point in time, we still have a wait list. Maybe by the time this podcast episode goes out, we might’ve ended the private beta and released it to the public, so it just depends. We also have a Discord server that you could, of course, join, and we’ll post a link to that in the show notes here. And yeah, I’d say those are probably the two best ways. Other than that, there’s the Semiotic.AI website, where we’ll post updates as they come over time. But in general, I would say the Discord server is going to be your best bet. We have channels where you can interact with other users, and you can make feature suggestions if there are things that you really care about being within AgentC. And if you just want to keep up to date, there’s of course our changelog channel, which we will regularly update with new features that we’re adding.

Nick (00:45:25):

AgentC is just one example of some very cool things that the core devs are working on in The Graph ecosystem. And listeners can get a glimpse of some other things that might be coming by reviewing that New Era roadmap that was released by The Graph Foundation. When you think about the future, Ani, in addition to all the things you’re working on, what makes you excited or optimistic about the future of The Graph and web3?

Anirudh Patel (00:45:48):

I’ll go back to what I mentioned before, in terms of our potential to be a more efficient marketplace. I think the ultimate recipients of this are going to be consumers. I think the big challenge that we have right now is sort of the fact that it’s just confusing for people to enter this space. There are a million different tokens that they have to worry about: if I want to do this, what do I have to do? Transition my money here, and this and that.

(00:46:11):

And so, we have this UX problem within the space more than anything, I think. And with AgentC, with intents more broadly, I think this is just an attempt to solve that problem for consumers. So, there are two aspects, two things that make me excited. One aspect is the marketplace side, where we can create this environment in which people who participate will be able to immediately see the financial and quality-of-service impact, whatever type of impact, on their own lives. And at the same time, we’re also working on this other side, within AgentC, to improve the UX and bring people to this space, so that they can experience it in the first place.

Nick (00:46:45):

Ani, in my opinion, one of the things that defines being alive in our time, in our day and age, is this 24-hour news cycle. And it’s always a shock to me that something that carries all the attention for a week or a month is disregarded in such a short period of time. Of course, there’s a lot going on in the world right now geopolitically. And one of the things that was at the top of the cycle for a very long time was this discussion about AI, the risks of AI. I’m sure people are still talking about it, but I think there’s an argument that mainstream media has moved on from it. But anytime I get the chance to speak with someone with your background, and I know this is a topic we’ve discussed before on the podcast, I want to go back to it a little bit: what do you make of mainstream media’s, or maybe some of the talking heads’, concerns about the risks of AI?

Anirudh Patel (00:47:36):

It’s a tough question for me to answer. I would say that there are people with more expertise in this domain than I have. My focus is very narrow in a sense. And the way I look at things is that there are a lot of advancements we can make within the AI space before I think we even get close to artificial general intelligence. Artificial general intelligence is that scary, Terminator-style AI that people talk about in the news. Like, “Oh, this thing can take over the world, it can infect your computer, it can do this, it can do that.” In the meantime, there are applications that we’ve talked about today, like helping radiologists at hospitals and improving medical outcomes within that space. People are using AI for climate modeling. People use AI for efficient allocation of resources to people below the poverty line. So there are already a million ways in which AI is helping people, and I think it would be a shame to disregard all of those things just because of some potential future risk.

(00:48:36):

I think whatever future risk may exist, again, this is not my domain. I don’t want to speak to something that I’m not qualified to speak about, so I mostly leave that decision-making to people who have expertise in that space. None of the problems that I’m working on, quite frankly, will get anywhere close to that. And so, I’m happy to continue to contribute within the AI space and to try to help people however I can, using the things that I’ve learned.

Nick (00:49:03):

Another thing that makes you super interesting to me is, here you are, you’re working in web3, it’s an emerging industry. You’re working on The Graph among other things, but certainly putting your focus on building out The Graph ecosystem, an emergent protocol in an emergent industry, really doing some bleeding-edge tech work. And then you’ve got this AI component, which, as I understand it, is a discipline that is also still evolving, with a lot of research going on. So, what’s the trick here? How do you stay up-to-date on AI, for example?

Anirudh Patel (00:49:38):

Reading a lot of papers and keeping up-to-date. At this point in my career, I’m connected with a lot of researchers within the space. We have group chats where people are always posting stuff like, “Oh, hey, here’s an interesting paper that I think you guys should read.” So, that’s one aspect, because of course, I can’t filter through the 1,000 papers that are posted every day by myself. But after that, the way I keep up is mostly through conversation and through reading papers. I have a goal that I should read three papers every day. Not skim them, I mean actually read them. I’ll probably skim 10 papers a day to [inaudible 00:50:19] figure out which three I want to read. Yeah, but other than that, I don’t think there is a great way to keep up with AI, unfortunately.

(00:50:26):

I think, like you mentioned, there tends to be a lot of hype around the space. And even for me, it’s not necessarily easy to discern what is hype and what is not. And so, if you’re not going to follow along with papers or speak with a friend who knows about AI, I would really try to stick with trusted websites, like MIT Technology Review or something. The last level down the chain for me is going to be something like a blog post. I think a lot of AI researchers at this point are, somewhat unfortunately, dismissive of blog posts, because for the most part, of course, they have not been peer reviewed, and they will oftentimes just get basic facts incorrect. So there’s just this aspect of having to sift through a lot of nonsense to find something valuable.

(00:51:13):

Of course, there could always be something valuable in a blog post. One of the funniest citations that people often make within AI is from a professor’s lecture slides. He never actually published this; it’s just from his lecture slides, [inaudible 00:51:28] slide, whatever, from this guy’s presentation. So there’s always valuable stuff in things that are not peer reviewed, but it’s just a lot harder to find, and you have to be a lot more discerning and careful. So I would encourage people, even if you read my own blog posts on the Semiotic AI website, take it with a grain of salt. I will do my best to be truthful and honest, and I’ll have, let’s say, other people within Semiotic or other AI researchers read over it, but it hasn’t gone through peer review. We could have messed up somewhere.

Nick (00:51:54):

Ani, I only have two more questions for you before I ask you the GRTiQ 10. And the first one, a question I’m starting to get in the habit of asking because I think it’s always interesting, is about what makes you excited for the future of AI. For average everyday people who don’t have your level of expertise, who aren’t working on the front lines of AI and getting insights by virtue of reading these papers and things, what’s the one thing you would want everyday folks to know about AI?

Anirudh Patel (00:52:24):

I think the one thing everyone should know is that I wouldn’t be scared about some future AI threat. AI is already so integrated into your life, in sort of mundane ways that you don’t even notice. And I think that’s the way it’s going to continue for a while still. So I mean, I’m really excited to see in which ways AI will continue to solve problems that we face.

(00:52:48):

I think from a more theoretical perspective, of course, I’m at heart a researcher, so I’m excited to see how people solve the problems that I’ve dedicated my career and my research so far to. So for example, there’s something called the Markov property, and when you move from single-agent decision-making to multi-agent decision-making, you lose the Markov property, and that ends up being a big problem for us. I’m also quite interested in following AI explainability, probably for reasons similar to why people in the web3 space are interested in verifiability. It’s hard to trust that black box sometimes. And so, I don’t think we can move forward in many ways; we’re going to be stuck in this loop where, as AI developers, we just have to say, “Well, you just have to trust us that the model is doing what we say it’s doing,” until we get explainability working a lot better.
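For readers unfamiliar with the term, the Markov property says that the next state depends only on the current state and action, not on the full history. Formally:

```latex
% Markov property: conditioning on the full history adds nothing.
P(s_{t+1} \mid s_t, a_t, s_{t-1}, a_{t-1}, \ldots, s_0, a_0) = P(s_{t+1} \mid s_t, a_t)
```

In a multi-agent setting, other agents' hidden and changing policies become part of the environment, so this conditional independence no longer holds from any one agent's point of view, which is the loss being described.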

(00:53:39):

In terms of impact, I think about AI as being on par with the computer in some ways. Before computers, doing long, complex calculations was, I mean, possible, but it would just take you forever, and it was impractical, let’s say, to solve those things. And so, when the computer first came out, we were able to solve problems that we never thought we would be able to solve in the first place. And after that, we were introduced to a lot of new problems, like the entire field of computer science, or signal processing for that matter. All these fields are post-computer, and I think of AI in a very similar way. I think at some point, AI is just going to become this ubiquitous experience, in a similar way to how we just interact with computers without thinking nowadays.

(00:54:27):

It’s just going to change the way we interact with things. I think at that point, it’s really tough for me to see what’s on the other side. All these fields came out of the computer, so what sort of new problems, what sort of new challenges are we going to face once we have this inflection point within AI? And that’s another thing, if I’m alive for it, that I’ll be really excited to see.

Nick (00:54:48):

The last question I want to ask you is about traveling, going back to your youth and this hobby you had, and how cool it is that you would immerse yourself in the culture, the food, the customs of a place before you would travel there. I really love that approach. How do you think growing up with an interest in travel, and the approach you took to it, has influenced your perspective on life, AI, technology? Have you ever thought about that?

Anirudh Patel (00:55:16):

Full circle. Yeah. So traveling to me, in a way, some of the best memories I have while traveling are not from when I was visiting the Sagrada Familia or some famous monument. I think you just learn a lot by getting lost in some unfamiliar place and having to find your way around, talk to people, interact with people. There’s a sort of aspect, at least to the way that I like to travel, of just being comfortable in your ignorance, just knowing that you’re going to make some mistakes, you’re going to make some wrong turns, you’re going to say something, and it accidentally turns out that this phrase, which Google Translate thinks is fine, is offensive in this country. You just have to be happy to be in that uncomfortable space.

(00:56:02):

And I think that influences how I approach my work, my research and my everyday life, even still. I’m really quite happy to be in a space where I don’t know everything, and I’m going to make a bunch of wrong turns before I figure out what I need to know. So I’m happy to learn new math, I’m happy to work on a totally new problem or research something new. I’m happy to learn a new programming language if I need to, or even learn non-technical skills outside of engineering. Ultimately, what I enjoy most is that experience of being in a new place, mentally or physically, whatever, and learning. And I think that spans traveling and everything else.

Nick (00:56:45):

Ani, now we’ve reached a point where I’m going to ask you the GRTiQ 10. These are a listener favorite. I happen to enjoy them a lot myself. These are 10 questions I ask each guest of the podcast every week. And I do it because I think it helps listeners learn something new, try something different, or achieve more in their own life and with the added benefit of getting to know our guests a little bit better. So Ani, are you ready for the GRTiQ 10?

Anirudh Patel (00:57:06):

Let’s do it.

Nick (00:57:18):

What book or article has had the most impact on your life?

Anirudh Patel (00:57:22):

This is a really tough question for me. I really enjoy reading. You can probably guess from all my answers about traveling that reading about these places is a big part of that. At any given time, I’ve got a backlog of a hundred books that I’m trying to get through. I’d say maybe anything by Peter Singer. I think he has a very interesting way of looking at the world, in which he takes ideas that, let’s say, we commonly agree on, and then plays them out to their logical conclusion. It’s a way of thinking I’ve acquired from reading his books. For example, to take one of his examples, let’s say you’re walking home and you’ve just bought some really nice shoes for 100 bucks. And as you’re walking, you see a kid drowning in a pond.

(00:58:05):

It’s a pretty shallow pond, let’s say it’s only hip height for you. So you’re not going to drown, you’re not going to injure yourself by going and rescuing the kid, but you will ruin your $100 shoes. And so, Singer’s question is basically, “Okay, are you going to go save the child?” I think for most people, there’s no hesitation here. It’s like, “Yeah, of course, who cares about my shoes? I’m going to go save the kid.” Okay, so now let’s say the kid is 20 feet further away. Do you still save the kid? Does it make any difference? You can still make it to the kid in time, let’s say. I don’t think this changes anyone’s answer, really. Okay, we can add another level. Let’s say you’re in a shop. You can’t see the kid, but you can hear the kid crying for help outside.

(00:58:43):

Does this change your answer? Again, most people are probably like, “No, I’ll just run out of the shop with my new shoes on, ruin them, and go save the kid.” And Peter Singer turns this around, this is from a book called The Life You Can Save, and asks, “Well, isn’t this pretty much exactly the same situation that we’re in every day when we choose to buy something instead of donating the money?” If you take that $100 and you donate it, there are certain endeavors you can put that money towards where it’s basically one for one, you’re guaranteed to save someone’s life. And so, there’s this sort of interesting question: we all have these values, these biases, whatever you want to call them. And I don’t think, at least I didn’t, until I read Singer’s work, try to take a lot of them to their logical conclusions and say, “Okay, well, what does this actually mean?”

(00:59:29):

And at that point, I went through this exercise of, okay, do I need to reconsider my values because I don’t like the conclusion that I’ve reached, or do I need to change my actions to bring them in line? Any of Peter Singer’s books are just really interesting reads.

Nick (00:59:44):

How about this one? Is there a movie or a TV show that you recommend everybody should watch?

Anirudh Patel (00:59:48):

I have a terrible memory for TV shows and movies. I tend to watch them just to pass time in the evening when I want to shut my brain off and not think, so I don’t have a great memory for them. Whenever people quote movies or TV shows at me, I’m like, “I have no idea what you’re talking about,” even if I’ve seen it. So I’m currently making my way through a Hindi-language web series called Rocket Boys, which tells the story of the two founders of India’s nuclear program and India’s space program. I’ve heard season two isn’t great. I’m still on season one, so I can’t confirm that. But season one has been good, in case anyone wants to check it out. Like I said, I’m probably not the right person to ask this question.

Nick (01:00:26):

Well, let’s try another one then. If you could only listen to one music album for the rest of your life, which one would you choose?

Anirudh Patel (01:00:32):

This is hard for the opposite reason. It’s like the books, I like too many different types of music. So, I’ll tell you again what I’m listening to. Right before this call, actually, I was listening to a jazz album called We Like It [inaudible 01:00:47]. I think the cool thing about this album is that when jazz musicians do improv, they often try to take quotes from famous songs of the past, or songs they’ve heard in the past. This album has a ton of quotes. If you like jazz, you’ll hear all these quotes throughout, but in very interesting places and in very interesting ways. So, it’s just a fun listen.

Nick (01:01:07):

And how about this one, what’s one thing you’ve learned in your life, and I get the sense there’s a lot here, that you don’t think most other people have learned or know quite yet?

Anirudh Patel (01:01:16):

You’re going to make mistakes. Learn how to say sorry without making excuses.

Nick (01:01:20):

What’s the best life hack you’ve discovered for yourself?

Anirudh Patel (01:01:23):

Yeah, I think I’ll just go back to the whole Peter Singer discussion. I think it took me until I was 25 to actually think about my values and my actions. And like I was mentioning before, sometimes I found that my values led to conclusions that I didn’t like, so I had to change my values. And sometimes, I found that my actions didn’t line up with my values, so I had to change my actions. But it’s just a really interesting, and I think useful, experience to have. Because ultimately, our values are our actions. These are how we interact with each other, this is how we treat each other. And so, in a way, it feels like before I went through that process, I was just sort of operating blind.

Nick (01:02:06):

Ani, how about this one, what’s the one app or tool that you can’t live without?

Anirudh Patel (01:02:16):

So I’m an Emacs user, and I know immediately there were a thousand [inaudible 01:02:21] users that just screamed at their computer and shut off the podcast. So for that reason, I probably feel compelled to say [inaudible 01:02:30]. [inaudible 01:02:30] is basically a way to organize your notes. It stores all your notes in a database, takes keywords from each of your notes, and connects them with each other. And so, especially when I’m writing down notes about papers that I’m reading, it will often just automatically connect two different topics that I didn’t think to connect myself, or something like that. And so, it helps me maintain this web of connections without having to keep it all in my own head.

Nick (01:02:53):

Ani, based on your own life experiences and observations, what’s the one habit or characteristic that you think best explains why, or how people find success in life?

Anirudh Patel (01:03:03):

This is going to depend on your definition of success. I don’t really like to play the game in which I compare myself with other people. I don’t like to play the game of, “Oh, if only I was richer, or if only I was more intelligent,” whatever it is. I mostly just think that those sorts of statements are harmful to my wellbeing. So to me, a successful person is just someone who’s learned to enjoy the process, rather than the goal.

(01:03:26):

I think this goes along with research, where half of what you try just ends in, “Okay, well, that didn’t go anywhere, that didn’t work.” And so, ultimately, there’s always someone richer, there’s always someone more intelligent than you are. And if you don’t fixate on what they have, and instead decouple your happiness and your motivation from reaching where they are, I think you’re able to focus your energies in ways that are more meaningful to you. And I think that’s ultimately what leads you towards a better place, more success.

Nick (01:03:52):

Now, Ani, the final three questions are complete the sentence type questions. The first one is complete the sentence. The thing that most excites me about web3 is…

Anirudh Patel (01:04:01):

The effect it could have on improving consumers’ lives through more efficient marketplaces.

Nick (01:04:06):

And how about this one, if you’re on X, formerly Twitter, then you should be following…

Anirudh Patel (01:04:12):

I’m not on Twitter, so let’s say The Graph.

Nick (01:04:15):

And lastly, complete this one. I’m happiest when…

Anirudh Patel (01:04:19):

I’m working on something or doing something that no one else has done before. At heart, I’m a researcher.

Nick (01:04:33):

Ani, thank you so much for coming back on the GRTiQ Podcast. Again, for listeners that want to do a deeper dive on web3, crypto, AI, you can join a prior podcast. I’ll put links in the show notes, where Ani and some colleagues from Semiotic Labs came in and shared a lot of really cool information. But this was fun, Ani, to get to know you a little bit better, learn more about your background and of course, the release of AgentC. That’s super exciting as well. If listeners want to stay in touch with you, follow the things you are working on, what’s the best way to do it?

Anirudh Patel (01:05:04):

So, I’m not really on social media, but I’m happy to connect with anyone on LinkedIn. If you want to follow my research or my papers, I would suggest just looking me up on Google Scholar. Of course, my blog posts will be on Semiotic.ai. But in general, I would say we have our AgentC Discord. I recommend that as a way to get in contact; if you want to direct message me on there or something, that works as well for me. So yeah, I’m just really excited to hear the community’s feedback about AgentC and about what we can do to improve the experience within the web3 space.


DISCLOSURE: GRTIQ is not affiliated, associated, authorized, endorsed by, or in any other way connected with The Graph, or any of its subsidiaries or affiliates.  This material has been prepared for information purposes only, and it is not intended to provide, and should not be relied upon for, tax, legal, financial, or investment advice. The content for this material is developed from sources believed to be providing accurate information. The Graph token holders should do their own research regarding individual Indexers and the risks, including objectives, charges, and expenses, associated with the purchase of GRT or the delegation of GRT.

©GRTIQ.com