This is a special edition of the GRTiQ Podcast. As a result of all the hype and conversation surrounding artificial intelligence, AI and crypto, and things like ChatGPT, I’m speaking with three members of the Semiotic Labs team – Anirudh Patel, Sam Green, and Tomasz Kornuta. Semiotic Labs is a Core Dev team working on The Graph, and they bring deep expertise and experience in artificial intelligence. I doubt there is a more distinguished team working on AI in crypto.
If you’ve listened to the podcast before, then you know I’ve already featured two members of the Semiotic Labs team before – Sam Green, Co-founder and CTO and Ahmet Ozcan, Co-Founder and CEO.
During this incredible episode, Ani, Sam, and Tomasz explore what artificial intelligence is, the origins of the discipline, and the epic rise of ChatGPT and how it works. Then we shift our focus to artificial intelligence and crypto, including an in-depth discussion of all the recent hype and some legitimate examples of how AI is being used in the space. Finally, we explore how the team at Semiotic Labs is implementing AI at The Graph, how they plan to do so in the future, and a fascinating description of how tools like ChatGPT, Geo, and The Graph could revolutionize the industry.
The GRTiQ Podcast owns the copyright in and to all content, including transcripts and images, of the GRTiQ Podcast, with all rights reserved, as well as our right of publicity. You are free to share and/or reference the information contained herein, including show transcripts (500-word maximum) in any media articles, personal websites, in other non-commercial articles or blog posts, or on a non-commercial personal social media account, so long as you include proper attribution (i.e., “The GRTiQ Podcast”) and link back to the appropriate URL (i.e., GRTiQ.com/podcast[episode]). We do not authorize anyone to copy any portion of the podcast content or to use the GRTiQ or GRTiQ Podcast name, image, or likeness, for any commercial purpose or use, including without limitation inclusion in any books, e-books or audiobooks, book summaries or synopses, or on any commercial websites or social media sites that either offer or promote your products or services, or anyone else’s products or services. The content of the GRTiQ Podcast is for informational purposes only and does not constitute tax, legal, or investment advice.
We use software and some light editing to transcribe podcast episodes. Any errors, typos, or other mistakes in the show transcripts are the responsibility of GRTiQ Podcast and not our guest(s). We review and update show notes regularly, and we appreciate suggested edits – email: iQ at GRTiQ dot COM.
The following podcast is for informational purposes only. The contents of this podcast do not constitute tax, legal or investment advice. Take responsibility for your own decisions, consult with the proper professionals and do your own research.
Sam Green (00:00:19):
We’re all excited about figuring out what, with these emerging capabilities, specifically with LLMs, we can do in The Graph. I think it’s going to be a highlight of our careers to see these tools once we start deploying tools based on these techniques in The Graph. It’s going to be very exciting for us.
Welcome to the GRTiQ Podcast. This is a special edition of the podcast. As a result of all the hype and conversations surrounding artificial intelligence, AI and crypto, and things like ChatGPT, I’m speaking with three members of the Semiotic Labs team: Anirudh Patel, Sam Green, and Tomasz Kornuta. Semiotic Labs is a core dev team working on The Graph, and they bring deep expertise and experience in artificial intelligence. I doubt there’s a more distinguished team working on AI in all of crypto. If you’ve listened to the podcast before, then you know I’ve already featured two members of the Semiotic Labs team before: Sam Green, co-founder and CTO, who you’ll be hearing from again today, and Ahmet Ozcan, co-founder and CEO. During this incredible episode, Ani, Sam, and Tomasz explore what artificial intelligence is, the origins of the discipline, and the epic rise of ChatGPT and how it works. Then we shift our conversation to talk about artificial intelligence and crypto, including a discussion of the recent hype, some legitimate uses of AI, and how it’s being used in the space.
And we also explore how the team at Semiotic Labs is implementing AI at The Graph, along with how they plan to do so in the future, including a fascinating description of how tools like ChatGPT, Geo, and The Graph could revolutionize the industry. I started the discussion by asking Ani, Sam, and Tomasz to introduce themselves and share how they came together at Semiotic Labs.
Anirudh Patel (00:02:42):
Hey, I’m Anirudh. I have been with Semiotic for a year now. My focus is really in AI, and reinforcement learning in particular. Excited to be on the podcast with you guys.
Sam Green (00:02:51):
Hey, this is Sam Green. I’m a co-founder and a head of research at Semiotic Labs.
Tomasz Kornuta (00:02:55):
My name is Tomasz Kornuta. I joined Semiotic almost a year ago. I’m VP of Engineering/head of AI. It’s a pleasure.
Well, I want to welcome each of you and thank you for joining the podcast. As all of you know, there’s been a lot of discussion about artificial intelligence on crypto Twitter and throughout the industry. And so people are curious, what is artificial intelligence? How does it relate to crypto? And more specifically, what does it mean for The Graph? And so it’s very fortunate for The Graph community to have experts like the team at Semiotic Labs that can come in and do a special release like this and explore the topic. And so I want to start by asking about the expertise that the team has in artificial intelligence, because I think it’s really important when there’s so much noise online and within the industry, to address all this momentum. Can you provide an overview of the expertise that Semiotic team has in artificial intelligence?
Tomasz Kornuta (00:03:52):
So one of our strengths in AI, I think, comes from the fact that in the past we were working on multiple AI-related projects, and our expertise comes from, let’s say, the combination of AI with other domains. We were working on neuromorphic architectures, perception, robotics, multi-agent systems, transport autonomy, and so on. As a result, our collective knowledge enables us to apply AI to novel domains, and blockchain is one of them, or The Graph protocol in particular. Here’s a fun fact: we never actually spoke about our past expertise; we never had that open conversation. So at one point I got really excited when I realized that, hey, I’d be able to just chitchat about AI with Sam and Ahmet. When it comes to my personal expertise, I started working on AI almost 20 years ago, focusing first on robotics, manipulation planning, and computer vision. After defending my PhD on active perception and active vision, I worked on 2D/3D perception using sensors such as the Kinect.
Next I joined IBM Research, where I met Alexis and Ahmet, who later became my manager on the machine intelligence team, and we got very interested in multimodal machine learning and started to conduct research on visual reasoning, combining vision and language. At one point, I left IBM Research and joined an Nvidia apps team, where I extended my multimodal interests. After a few months, I got more involved in pure NLP research and worked on problems such as dialogue management and semantic search, leveraging the latest advancements in NLP, what we right now call large language models. And at one point Ahmet approached me saying that we’ve got this grant from The Graph Foundation, and the rest is history, as you guys know.
Sam Green (00:05:37):
For me, I’ll start my ML history at Sandia National Laboratories, which is part of the US Department of Energy. There I was analyzing cryptographic hardware for security weaknesses, and I was actually doing a lot of data science, a lot of statistics, during that time. The statistics eventually turned into doing machine learning, and that eventually led me to pursue a PhD. I left Sandia Labs and went to Santa Barbara, where I worked on a PhD in computer science. During my PhD, I became really interested in decision making under uncertainty, and reinforcement learning is one of the techniques you can use to do decision making under uncertainty. I ended up getting a PhD in efficient reinforcement learning, that is, when you want to make decisions either at high speed or low power. While I was finishing the PhD, I went up to a workshop at IBM Research, and that’s where I met Ahmet.
And then eventually after that, I met Tomasz, and then at the very tail end of my PhD, at the end of 2019, Ahmet called me and said, “Hey, what do you think about starting a company?” And we decided to start a company with each other based on all of our shared interests.
Anirudh Patel (00:06:54):
For me, I actually share some history in a weird way with both of these guys. My background is actually in signal processing. In signal processing in particular, I focused on medical imaging applications. One of the big things going around back when I was in school was recognizing pneumonia in chest X-rays. It was a classic deep learning problem. At the time, we had papers coming out about deep learning algorithms that were outperforming teams of radiologists working together. It was really state of the art. And so that sort of sparked my interest in this area. From there, I went to Sandia National Labs. I never met Sam at Sandia; we had some mutual contacts, and we actually worked on the same project briefly but never talked to each other. At Sandia I focused originally on the vision side of computer vision. So: here are some sensors, can we actually discern some data from them, recognize some patterns, make some decisions? From there I got into the interesting part, decision making. People typically think about the problem of controls as a pipeline: see, think, act.
Vision is the “see”; the “think” and “act” are, at least to me, the more interesting problem, and that’s sort of where reinforcement learning is. And so for the latter few years of my career at Sandia, I really focused on multi-agent reinforcement learning research. From there, I think my and Sam’s mutual contact at Sandia reached out to me, like, “Hey, Sam’s looking for someone who knows some RL stuff. You want to chat with him?” I was like, “Sure.” And here I am.
Sam, to provide a little bit of context for listeners, can you just start off and give us a little background on Semiotic Labs, who you are and what you do with The Graph?
Sam Green (00:08:28):
Yeah, so Semiotic is a core dev team with The Graph. We’ve been involved with The Graph since the beginning of 2021, when we independently started to run an indexer. And we’ve been running a high performance indexer since that time. semiotic-indexer.eth is our name if you go and look at the Indexers. Then after that time, we got involved with the grant program of The Graph, and our first grant was related to AI: we started doing reinforcement learning to test how agents that could set prices would affect the protocol. Shortly after that grant, we got another grant during the next wave of grants, and we expanded our scope into cryptography. And these days we’re doing a lot. We have a whole team dedicated to cryptography, where we’re working on micropayments and starting to work on verifiable indexing and verifiable queries. So that’s a summary of who we are and what we’re doing.
Ani, I want to go to you with this first question as we start with this just very general question, what is artificial intelligence? What are we talking about?
Anirudh Patel (00:09:37):
Yeah, that’s a great question. I think to start with a higher level question: just what is intelligence? What are we making artificial? And this is just, unfortunately, one of those subjects that is to this day incredibly controversial within the academic sphere. The high level undergraduate definition you might read in a textbook is going to be something like the ability to learn and adapt. A lot of people have various problems with that definition that I guess I won’t go into here, but to me, this concept of intelligence fractures quite a bit into a lot of smaller pieces. We can talk about artistic intelligence versus numerical intelligence, spatial intelligence, and it’s tough for me to compare: is an artist more intelligent than a mathematician, or the other way around? Those are apples and oranges, really. And so the definition of AI I like, for this reason, because it’s just so difficult for me to define intelligence, actually comes from John McCarthy, who’s one of the founding fathers of AI.
He defined AI as getting computers to do things which, when done by humans, are said to involve intelligence. Pretty simple. The reason he liked this definition is just that it sidesteps the issue of defining intelligence. And I like it for much the same reason: it lets me just get on with my work and forget about that problem. I think most AI practitioners realize that AI is really an umbrella term, and so I’ll say I do reinforcement learning work, Tomasz might say he does NLP work, and so on.
Tomasz Kornuta (00:10:59):
We’ve discussed this topic a little bit with Ani in the past. I believe in the theory by Howard Gardner, it’s from the ’80s, the theory of multiple intelligences. Basically, the idea is that there is more than one modality or type of intelligence that we can distinguish. Ani already mentioned logical-mathematical, like abstract reasoning; Einstein is a good example of a person who was strong along that axis. But if you think about it, there are people who are word smart, linguistically smart, like Shakespeare, all the writers, people who write poems. I cannot do that very well, but it seems that, for example, ChatGPT can do it quite well. That’s why I wanted to bring this up: AI has advanced more along some of those axes. Other ones are, I don’t know, spatial, like painters and sculptors, or musical intelligence. It’s quite interesting. The natural consequence of this is that we measure each of those modalities of intelligence in a different way.
Well, I really like this idea of modalities of intelligence and the different ways we might think about this. It’s also refreshing to hear that a team like Semiotic Labs, which has deep expertise in artificial intelligence, acknowledges that the field is still growing and expanding, and that some of these definitions of what exactly artificial intelligence is are something even the experts continue to debate. I do think it would be interesting for listeners to unpack a couple of terms that typically show up in this bucket of artificial intelligence and to understand how they relate. So how is artificial intelligence different from things like machine learning or reinforcement learning, or even things we hear about like large language models? Can you help us understand how all these things fit together?
Anirudh Patel (00:12:50):
Yeah, this is a great question, because I think a lot of people tend to get this mixed up. Even we, as AI people, are probably guilty of this, where we tend to mix up the terminology casually because we know what we mean. Let me start with three terms that you hear quite often: artificial intelligence (AI), machine learning (ML), and deep learning (DL). AI, as we just discussed, is sort of this umbrella term; it captures all the things underneath it. Machine learning is one of those terms underneath the umbrella, one specific subset of artificial intelligence. ML in particular is concerned with, as a high-level definition, how computers can learn from data. To do this, typically we use some sort of statistical model. The step to deep learning, which is a subcategory of machine learning, is just that the statistical model we use is a deep neural network. It’s a very particular form of powerful learning algorithm that we have found to be quite successful. You asked about large language models. Large language models are trained using deep learning, using deep neural networks.
And so LLMs, large language models, also fall under this deep learning category. There’s also reinforcement learning. Reinforcement learning is something people might classify as a subcategory of machine learning. It has quite a different history, but the idea is that instead of learning directly from data, you let the agents, the computer, go out into the world and play around with stuff. The classic example here that people might know is AlphaZero, the chess-playing AI that was able to beat grandmasters. So those are the differences between AI, ML, deep learning, and reinforcement learning. But just to make a point, I want to make it clear that there are other parts of AI outside of machine learning. One common example that people probably aren’t familiar with is Fuzzy Logic. Fuzzy Logic is one of the most common uses of AI in the wild. It’s in your washing machine, it’s in trains, it’s all over the place. And Fuzzy Logic is just about how you bring human subjectivity into computers.
So if I ask you the question, is it warm outside? You might give a different answer from me. So there’s no true or false answer, but computers are always true or false. It’s ones or zeros. It’s all binary. So Fuzzy Logic is about how do you bring that aspect of human cognition to computers.
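Ani’s “is it warm outside?” example can be sketched in a few lines of Python. This is only an illustration of the fuzzy-logic idea he describes, not anything from Semiotic’s codebase, and the temperature breakpoints (15 °C and 30 °C) are made-up assumptions:

```python
# Fuzzy logic replaces the binary "warm or not warm" with a degree of
# membership between 0 and 1. The breakpoints below are arbitrary choices
# for illustration only.

def warm_membership(temp_c: float) -> float:
    """Degree to which a temperature counts as 'warm' (0.0 to 1.0)."""
    if temp_c <= 15.0:
        return 0.0          # definitely not warm
    if temp_c >= 30.0:
        return 1.0          # definitely warm
    # Linear ramp between the two breakpoints.
    return (temp_c - 15.0) / (30.0 - 15.0)

print(warm_membership(10))    # 0.0 -- a crisp "false"
print(warm_membership(22.5))  # 0.5 -- partially warm
print(warm_membership(35))    # 1.0 -- a crisp "true"
```

Instead of forcing a yes/no answer, the function returns a shade of gray, which is exactly the human subjectivity a fuzzy controller in a washing machine or train reasons with.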
Sam, how important do you think it is to make these distinctions? You’ve got artificial intelligence, and as Ani just explained, you’ve got machine learning, reinforcement learning, all these different things. Is it important for the average everyday person to make these distinctions, or are these types of distinctions reserved for experts?
Sam Green (00:15:12):
Oh man, that’s super interesting. The first thing that comes to my mind is: do we need to know the differences between other domains that are influencing our lives? So machine learning and AI, they’re similar fields; ML is a subset of AI. But if you go to the movies, let’s say as a consumer at the movies, do you need to be familiar with the different genres of movies, or can you just go to a movie and enjoy it? You probably don’t need to be an expert in cinematography to go to a movie and enjoy it. Similarly with these AI tools: do you really need to know all the technical terms and all the subfields for AI to have an impact on your life, or for you to use these tools? No. So if you’re interested in digging into this field, of course, that’s awesome, but you don’t have to know all of these details to benefit (or not) from the tools that are emerging.
Ani, a natural question would be why does humanity need artificial intelligence? Things like reinforcement learning, machine learning, where’s the utility?
Anirudh Patel (00:16:17):
It’s similar to asking why we need microwave telescopes: because they enable us to see things that we wouldn’t be able to see with our biological capabilities. The whole point of the machine learning endeavor is really to recognize patterns, and it’s tough for humans to recognize all patterns, just because we live in a space where we don’t have access to complex mathematics. I can’t do crazy math in my head. This is what computers are exceptional at, though. Computers are great at cranking through numbers. The classic example here is the support vector machine algorithm. Before neural networks, it was probably the most popular algorithm out there. Support vector machines work by projecting your data into high dimensional spaces, potentially infinite dimensions. Now, I struggle in three dimensions; I can’t do infinite dimensions. So why do we care about this? As an illustrative example, you can imagine a scene where there’s a table in front of a sofa. If I try to draw this on a piece of paper, what happens is that the table and the sofa start to exactly overlap.
Of course, we see in three dimensions, so we know the table is actually in front of the sofa, not on top of it; the data is not on top of itself. In the same sense, you can move data into higher dimensions to make patterns that are not separable or not distinguishable in lower dimensions more apparent, and this is what machine learning excels at, really. This isn’t what every ML algorithm is doing, but the notion still holds. Generally, it’s all about pattern recognition.
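The sofa-and-table intuition can be made concrete with a classic toy example in Python. The XOR-style data below is made up for illustration: no straight line separates the two classes in 2D, but lifting each point into 3D with an extra feature x·y makes them separable by a flat plane, which mirrors (in miniature) what kernel methods like SVMs do implicitly:

```python
# Four points in an XOR pattern: same-sign coordinates form one class,
# mixed-sign coordinates the other. In 2D no line separates them.
points = [(1, 1), (-1, -1), (1, -1), (-1, 1)]
labels = [+1, +1, -1, -1]

def lift(x, y):
    """Map a 2D point into 3D by appending the product feature x*y."""
    return (x, y, x * y)

# In the lifted space, the plane z = 0 separates the two classes:
# the third coordinate x*y is positive for one class, negative for the other.
for (x, y), label in zip(points, labels):
    z = lift(x, y)[2]
    print(f"point {(x, y)} -> z = {z:+d}, label = {label:+d}")
    assert (z > 0) == (label > 0)  # the plane z = 0 classifies perfectly
```

The pattern that was invisible in the low-dimensional "drawing" becomes trivially visible once the data has room to spread out, which is the point Ani is making about higher dimensions.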
Ani, I love it. So artificial intelligence and these types of things are going to help humanity identify and observe patterns and things that we just can’t do on our own, and I think that’s a super helpful way to build the context on why this is useful. But I also have to play devil’s advocate here and Sam, I’m sure you’re aware that in pop culture and even in some recent podcast interviews, there’s this concern that AI’s going to kill us, that this is actually existential. How do you feel about those types of arguments?
Sam Green (00:18:14):
Well, they definitely get my attention, and I have a response to these sorts of arguments. These are actually coming to me naturally; people send me these doomsday things on social media or whatever. And if you ask me if AI is going to kill us, I’m going to give two answers: a spicy answer and then an analytical answer. The spicy answer is: maybe, but let’s look back at the impact that technology has had on us. What’s the first technology? Probably rocks. That’s probably our first technology. What was the second technology? Probably fire. One of the things I used to do is take a bunch of wilderness survival classes, so I learned how to make fire with my hands. If you, or someone listening, don’t think that making fire is technology, go outside, without looking at YouTube, and try to make a fire. See if you can make a fire without looking up how to make a fire. It’s not easy.
Okay, once fire was made, I am sure that people started burning down their homes and burning down forests, burning down their whole environment by accident. Humanity could have said, “Oh my God, what is this fire you’re playing with? We should not be playing with this fire; let’s ban playing with fire.” Okay, we could have done that. Similarly, let’s jump to now with AI. The difficulty with banning fire is you’d have to ban sticks. The difficulty with banning AI is you’d have to ban math. How are you going to outlaw certain math from being computed? I mean, this should resonate with the web3 community. How are you going to ban certain math from being computed? How are you going to detect that certain math is going to be computed, especially if it’s not connected to the internet? So this is a technology that’s similar to fire. There’s a risk, but in my opinion, it’s part of the evolution of humanity as a tool-using species. So that’s my spicy answer. The analytical answer is that doom is way more interesting than positive outcomes.
Do a Google search for the “ban on AI” trope. There are way more movies and books about AI gone rogue than there are about positive applications of AI. Why? Because it’s not interesting. Thinking about these positive outcomes is super boring. In fact, it will be boring. It’s going to blend into our lives. Five years from now, no one will even remember ChatGPT, because it’s going to be running in the background, and people will just assume it’s a capability that should have always been there. So likely what’s happening with all this interesting fear that everybody’s attracted to is that it’s going to be correlated with funding in AI: there’s going to be a spike in funding, then basically a saturation of abilities, and then a crash, just like every hype cycle. What we’re seeing is probably just an acceleration of a hype cycle, in my opinion.
Tomasz and Ani, I want to give you a chance to weigh in on this argument. Let’s start with you, Tomasz. What are your thoughts?
Tomasz Kornuta (00:21:09):
Thanks. That was fascinating stuff, actually. So I did my PhD in robotics, so people were asking me, okay, are killer robots coming, are they going to kill us soon? I’ve been getting those kinds of questions for two decades now, and my answer was always the same. And luckily, I don’t know whether you guys know or remember the DARPA Robotics Challenge in 2014 and 2015, but there were those funny videos where the robots were slipping and falling because they couldn’t grab a door handle. My answer is still the same: as long as the robots cannot open the door, you’re safe. That’s my take.
It’s a good take. Ani.
Anirudh Patel (00:21:46):
I love Tomasz’s take. I think this is actually one of those times I’d want to be careful about terminology. So AI is just artificial intelligence. Like I said, that’s an umbrella term. The AI we’re talking about in this context is something that in the industry we call AGI, artificial general intelligence. It’s basically an intelligence that could replace a human, like a superintelligence. Right now we’re really just at AI. We’re nowhere close, in my opinion, to AGI. And the thing for me is that because I think we’re so far away from AGI, and because I think there are so many ways we can improve people’s lives using AI, I don’t think we stop here. I think we have further to push before we can get into discussions about AGI. It’s not something I’m confident, or even suspect, we’re going to stumble into by accident.
Sam, you made an interesting point in answering that question about one day, this will be so blended into our normal lives, we won’t even know that it exists and we’ll take it for granted. And so it raises this question of where did AI emerge from? Where did this discipline come from and who’s using it? How long has it been around? Tomasz, how would you answer that question?
Tomasz Kornuta (00:24:08):
Yeah, so I’ll start from the origins. Of course, as you can imagine, there is more than one starting point, but the one AI researchers start from is the Dartmouth College conference in the summer of ’56, when John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon coined the term artificial intelligence. That was the proposal, basically, for that summer conference. And that really started the field of AI. So that’s already almost 70 years of research we are talking about, 70 years of people advancing AI. A fun fact is that neural nets are actually older than this. The first computational model of a neuron was created by McCulloch and Pitts in ’43. So now we are talking about 80 years of research. The history is very interesting: we are talking about eight decades of research, and at the beginning it was driven mostly by the universities, with some government funding and so on. In the meantime, we had those AI winters. But the interesting thing is that one of the AI founders, Marvin Minsky, was one of the people who helped bring on the first AI winter.
He co-authored a book called Perceptrons, in which he criticized neural networks, saying they had very limited capabilities, and that basically stopped funding and research for decades. Well, but it recovered. Luckily, there were a few people who didn’t stop working, including the three recipients of the 2018 Turing Award: Geoffrey Hinton, Yann LeCun, and Yoshua Bengio. They were not only doing research, but also trying to deploy artificial intelligence, deploy neural networks, put them into production. One prominent figure I already mentioned, Yann LeCun, worked on handwriting recognition and OCR, optical character recognition. He applied that to bank checks, recognition of the checks people write. He was working with US banks and deployed that in the 1990s and 2000s, and that was actually one of the biggest AI systems in production at large scale.
And I think what we are observing now is, okay, there is hype and so on, but the funding won’t go down, because right now the big corporations, the big tech companies and others, are seeing that we can actually make money on this, we can save money, we can be more productive, we can solve problems that would otherwise require hiring hundreds of people, and it’s just not going away. I don’t think it’ll go away. And right now, I see that we are just surrounded by applications. Whenever you look for a product on Amazon or the next movie to watch on Netflix, whenever you drive your not-quite-fully-autonomous car and it brakes on its own, well, that’s AI. Whenever you try to find a route to a restaurant using Google Maps, there goes AI. You take a picture with your cell phone, you’ve got AI down there. So AI is everywhere.
Tomasz, that’s an incredible overview of the history of artificial intelligence. And to think that some of this goes all the way back to 1943 will, I think, be a shock to listeners, because it feels so modern; it feels like something that just appeared. I think a lot of that is probably tied to the fact that ChatGPT has garnered so much interest, and once people started using it and it made news, all of a sudden artificial intelligence was this new thing. But in fact the roots are deep and the discipline has a rich history. When you saw ChatGPT, what were some of your initial impressions? Tomasz, for you, did you think this is fake artificial intelligence, this is the real thing, or this is just one tool among those that already existed?
Tomasz Kornuta (00:27:54):
Well, I’m aware of the whole evolution of GPT models, but I was super excited, especially since at Nvidia I was working on related technology: I was working on semantic search, and I was working with people working on chatbots. For ChatGPT, let’s start maybe from the definition. What is it? It’s a chatbot, which means it’s a program designed to carry on an online chat conversation. Under the hood, it has a language model called GPT, which stands for generative pretrained transformer. And actually it’s GPT-3.5, which means it’s generation three and a half. So what is a language model? A language model is a program that models a probability distribution over sequences of words. There is no magic down there. If you think about taking all the books in the world and looking at which words come after each other, this is what the language model is trying to capture, that pattern. As Ani mentioned, those hundreds of millions or trillions of patterns.
And what’s inside of GPT is really a neural network, a neural network based on the transformer architecture, which is trained in a specific way. So what is a neural network? In short, it’s a computational model inspired by the biological neurons we’ve got in our brains, in animal brains. It assumes there are some simple units called neurons, organized in layers that are interconnected. When we are training a neural network, we are basically changing the weights of those connections between the layers and the neurons in those layers. That’s the most elementary explanation of what is happening here. In the case of language models, those connections, and I’m making a huge simplification here, represent those transitions between words: there is a transition between two words, and that can be represented as a connection between two neurons.
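Tomasz’s definition of a language model, a probability distribution over word sequences learned from text, can be shown in miniature with a toy bigram model in Python. The tiny corpus here is made up for illustration; real LLMs like GPT replace these simple counts with a transformer network and billions of parameters, but the underlying "which word comes next" idea is the same:

```python
# A toy bigram language model: count which word follows which in a corpus,
# then turn the counts into next-word probabilities.
from collections import Counter, defaultdict

corpus = "the graph indexes data . the graph serves queries .".split()

# Count transitions: how often does each word follow each other word?
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def next_word_probs(word):
    """Probability distribution over the next word, given the current one."""
    counts = transitions[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))    # "graph" always follows "the" in this corpus
print(next_word_probs("graph"))  # an even split between "indexes" and "serves"
```

The model has literally learned "what word comes after what" from its training text, which is the pattern-capturing Tomasz describes, just at a microscopic scale.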
Sam, have you played around with ChatGPT or the new Bing chat?
Sam Green (00:29:54):
I’ve played around with ChatGPT a lot. At first, I was doing what I think most people do: they say hello to it, or ask it to tell them about some topic. And then I saw there are many researchers publishing papers based on being able to ask these large language models questions in a certain way and extracting more information out of them. So I’ve been playing around with it, trying to see if it can solve certain mathematical calculations and trying to help it when it gets stuck. I taught it to play a game of 20 questions, where it needs to ask a question and I can say yes or no, until it either can or cannot get to the object that I have in my mind, and it actually did that really well. So I’ve been playing around with ChatGPT a lot. ChatGPT has been deployed by Microsoft now in Bing, and I’ve been following that conversation and what’s been happening there, but I haven’t had access to Bing yet.
Tomasz Kornuta (00:30:51):
I just wanted to add that I’m playing with it almost every day. For instance, yesterday during our daily Kanban, I asked ChatGPT to generate a poem about a web3 company demanding SOC 2 and a head of AI, and it did an awesome job. It actually made my day. I love it.
Ani, what do you make of ChatGPT becoming the poster child, if you will, of artificial intelligence? I mean, you’ve been working in the field for a long time, as have your colleagues at Semiotic Labs. How do you feel about that?
Anirudh Patel (00:31:24):
Honestly, I have mixed feelings, and I think the three of us will probably give you quite different answers here. For me, the positive is really that people are engaging with AI, they’re interacting with AI in a way that they haven’t before. But to me the focus of AI should really be how we can help people. We have ML algorithms that can work with doctors to save people’s lives. We have fuzzy logic algorithms, like I talked about, in your washing machines and trains and whatever, that are improving energy efficiency. We have people using AI for climate modeling. There are just so many ways people are using AI really creatively to improve people’s lives. So I wish that was more of what was in the news, compared with ChatGPT.
Tomasz Kornuta (00:32:02):
Can I add something here? I wanted to bring some other examples, because I think it’s important to understand, people are saying that ChatGPT is the biggest commercial success right now, of course, and if you like ChatGPT, that’s awesome, but just look at what Boston Dynamics is doing with robots doing flips. I get those clips all the time from plenty of my friends. My mom is using a Roomba; she bought one and it roams around her apartment. You’ve got Cruise and Waymo working on autonomous taxi systems, and they’re deployed in San Francisco. You’ve got Face ID in your phone. You’ve got 20 million images generated daily on the four main generative AI portals, like Midjourney and so on. Those are all huge commercial successes of AI. I really would like people to remember that it’s not just ChatGPT that is out there. And I literally couldn’t imagine that five, 10 years ago.
Sam, do you have a sense for, given the fact that AI is being used in so many different ways as Ani and Tomasz shared there, why ChatGPT moved to the front of the list and got so much attention?
Sam Green (00:33:09):
It goes back to Ani’s example about microwave telescopes giving new powers to humanity. A microwave telescope empowers the scientists who know how to operate it, the astronomers. They were probably ecstatic when microwave telescopes came out, but they don’t help the everyday person. ChatGPT is a tool that can amplify my intelligence, it can amplify your intelligence. Anybody who sits down with ChatGPT feels within seconds that they have maybe some sort of new superpower that they’re learning to use, that they can control. So I think this is why ChatGPT has shot to the top of everybody’s minds more than any other AI breakthrough or advancement in the past.
Tomasz Kornuta (00:33:57):
Please look at the numbers: 1 million users in five days. This is really mind-blowing, and it comes from the two most important facts, outside of the fact that it’s a really great tool. First, humans just communicate with natural language; you don’t need any skills to use this. You don’t need to buy yourself a fancy car, you don’t need a fancy phone and so on, you just open the website and start working with it. Second, you need to understand that this is the next version of the same, let’s say, approach, and let me provide two interesting facts from 2021 and 2022 about GPT-3. Already in March 2021, it was outputting 3.1 million words per minute. Then in June 2022, GPT-3 had 1 million users. That was already a huge user base, and people were ready to just move on to the next level, to the next tool that is an improvement. It’s a much better model, it’s a much better chatbot, but the user base was there.
Well, in a lot of ways we should be super excited about the fact that AI is now the center of so much attention, because it has a lot of real-world applications, and I want to turn our attention to how it applies to the world of crypto, and particularly The Graph. But before we leave this topic of ChatGPT, Ani, I want to ask you this final question about accuracy and level of confidence. Despite all the headlines and everybody’s interest in ChatGPT, this is not a perfect tool. I mean, it makes mistakes and it hasn’t always gotten it right.
Anirudh Patel (00:35:38):
Yeah. This is a known problem. Tomasz talked a little bit earlier about how these models work, how it’s basically making a probability distribution over the next word in the sentence, so to speak. It’s an oversimplification, but it works. The thing is, probability distributions don’t distinguish between true and false. ChatGPT internally doesn’t have any notion of true or false. So I think the large language model community definitely recognizes this as a big problem. A lot of very smart people are trying to solve it. In theory, it’s probably solvable. There are interesting approaches people are taking, with knowledge graphs, with reinforcement learning based training of these models. Just know, if you’re using these models: no, it’s not going to be perfect, no, you should not rely on it as a source of truth. But yes, there are very smart people trying to figure out how to get it to be more reliable.
Sam Green (00:36:25):
I wanted to add a little bit about how ChatGPT is trained. Basically, all of the data on the internet was collected, and ChatGPT is trained with what’s called self-supervised learning, meaning the training dataset is the source of information for how it learns. Now, how many websites have you gone to where you read something and you’re like, okay, that is false? I think every single day you probably read a bad take on the internet. Well, ChatGPT is trained on all of those bad takes. So this thing has no clue what true or false is. This is on top of the problem that Ani gave. This thing was trained on every piece of information that could be scraped from the internet. You could think of the internet as a garbage dump, and this model has gone through every piece of garbage and tried to extract every pattern it could, and we end up with something very interesting.
Let’s turn our attention to crypto, and this will be the most interesting topic for most of the listeners today, because crypto Twitter came alive over the last month or so around AI projects and AI tokens. The Graph, by virtue of all the innovative work it’s doing, especially in association with the team at Semiotic Labs as a core dev, was brought to the forefront in this conversation. So let’s talk about that. Sam, how should listeners parse through what it means to have AI projects in crypto? Where do we even start? What does that even mean?
Sam Green (00:37:58):
Where do we start? So it makes sense why everybody’s excited. To me, the excitement was triggered by ChatGPT, which from here on in the conversation I’m going to generalize as large language models, or LLMs. ChatGPT, remember, is a type of LLM. Everybody is really excited at the end of the day because we’re now seeing what LLMs can do. We actually do have a real breakthrough, in my opinion, in the types of problems that could be solved. This is what we now know. And because of this general excitement about what LLMs are doing, we’re going to see over the next few years a lot of startups claiming to be built on top of LLMs. So there’s going to be a lot of VC money. There already is. There are a lot of products already started, a lot of companies already starting based on LLMs, a lot of VC money based on LLMs. Now let’s focus on crypto. If you said something was an AI token, then that’s like telling me that the project is a math token.
My question is, what is going to be done with the math? It’s a very nebulous sort of thing. You need an actual application. Let’s look at Google, for example. Google employs more AI researchers and has done more to advance the state of the field of AI than any other company that I’m aware of, but they don’t bill themselves as an AI company. The thing is, though, they have attributes that make it so that AI can amplify the data that they do have. So similarly, when I look at crypto, I’m interested in where AI can help the projects that exist in crypto. A few come to mind. Filecoin is one, Livepeer is one, and of course The Graph is one. Filecoin and Livepeer, by the way, just for the audience, I know the teams there. Those are really familiar projects to me; that’s why I’ve mentioned them. But in all three of those that I mentioned, we have a ton of data that’s organized in some way, and data is really what you need if you want to start doing interesting AI applications.
Tomasz Kornuta (00:40:03):
I would just like to add that The Graph saw the value of AI already a few years ago; that’s why we received a grant from the Foundation and became one of the core devs. I don’t think that it was the hype, it was rather the potential of the technology and the fact that The Graph operates on large amounts of data. What I’ve seen over the last month, really, is several web3 companies that just look shady, and I don’t want to offend anyone, I just want to say that those other projects are saying, “Oh, we are becoming a great AI token.” I’m looking at this and there’s no value proposition. They’re just trying to stitch together some things that do not make sense to me as an engineer, to me as a research scientist, to me as an AI researcher. They’re claiming that it’ll make AI better. I’m not convinced here. And what I’m rather concerned about is that they’re trying to ride that AI wave, which is okay, but it can backfire on all of us, the whole community. This is what I’m afraid of.
Tomasz, I mean, you really make an important point there about the minute Twitter came alive with all the speculation and noise related to artificial intelligence. But we can’t really write off the utility of artificial intelligence in crypto, because I presume there are legitimate uses right now. So Sam, what are some of the legitimate ways AI is being used in crypto presently?
Sam Green (00:41:22):
There are a number of groups and projects that are using AI, and something we haven’t talked about is that AI is adjacent to some other techniques, like control theory or optimization. These are other mathematical techniques that often involve computers to solve problems very efficiently or to control systems very efficiently. Sometimes you can pull from AI to solve a particular problem that you could also solve with a technique from control theory. So these are complementary techniques, and for this conversation, for right now, I’m going to lump control theory, optimization, and AI into the same family of techniques. So let’s look at some projects. The Ethereum Foundation’s Robust Incentives Group uses reinforcement learning to analyze changes to the protocol before they’re made. For example, with reinforcement learning you can train agents to act autonomously, and the Robust Incentives Group has used reinforcement learning to test whether certain changes have weaknesses or not.
We also have groups like BlockScience. BlockScience is a team of researchers that we’ve worked with in The Graph; they have a deep background in control theory, and they’ve had impacts on things like the RAI stablecoin, which uses control theory to keep its peg. We also have Gauntlet. Gauntlet is kind of similar to BlockScience, and they use unique approaches. They’ll use things like Monte Carlo simulations, so they’ll implement agents that have random behaviors to stress test protocols, to see if parameters in the protocols need to be changed. And of course, within The Graph we have quite a few efforts that we’ve deployed based on AI.
Well, let’s talk a little bit more about that. So Ani, as I mentioned, The Graph got grouped in with all these different AI crypto projects, and there was speculation on both sides: some said it belongs there, some said it doesn’t belong there. In terms of how The Graph is presently using AI, how would you characterize that?
Anirudh Patel (00:43:34):
Yeah. Maybe to follow on from Sam’s example, I’m going to group stochastic optimal control and optimization in with AI techniques more broadly. I think there are maybe 2.5 ways that we’re using AI currently within The Graph. One thing that we have is a system called AutoAgora. AutoAgora helps Indexers understand the cost, on a per-query basis, of serving each query. There’s an additional part to AutoAgora called AutoAgora agents. AutoAgora agents use reinforcement learning techniques to dynamically price Indexers’ queries, so as to maximize the revenue that they receive from the gateway. So that’s one way in which we’re using AI. Another way is to stress test economic mechanisms, or proposed economic mechanisms. One thing that reinforcement learning in particular is quite good at is breaking things. You give it some code with a bug in it, and it’ll find the bug and use it to reward itself somehow.
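AutoAgora’s agents do something far richer under noisy, changing demand, but the core idea of dynamically adjusting a price to maximize revenue can be sketched with a toy hill-climb against a made-up linear demand curve. Everything below is hypothetical illustration, not AutoAgora’s actual algorithm:

```python
def demand(price):
    # Hypothetical market: queries served per unit time falls as price rises.
    return max(0.0, 100.0 - 40.0 * price)

def revenue(price):
    return demand(price) * price

def tune_price(steps=500, lr=0.001, eps=0.01):
    """Climb the revenue curve using finite-difference gradient estimates,
    the way an agent could without knowing the demand curve's formula."""
    price = 0.10
    for _ in range(steps):
        grad = (revenue(price + eps) - revenue(price - eps)) / (2 * eps)
        price += lr * grad
    return price

print(round(tune_price(), 2))  # → 1.25, the revenue-maximizing price for this toy demand
```

Real query markets are noisy and non-stationary, which is why reinforcement learning agents, rather than a clean gradient ascent like this, are the tool of choice.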
So in the case of The Graph, what we’ve done in the past is we’ve looked at, for example, proposed changes to the indexing reward and the associated staking mechanism, and we’ve used these AI techniques to investigate whether the change would actually result in the outcome that we wanted. In that case, we found that it didn’t, so it didn’t really progress any further. We also have something called the Allocation Optimizer, and this is where I’m putting in my 0.5 of AI. The Allocation Optimizer uses more classical optimization techniques to help Indexers allocate their stake across subgraphs so as to maximize the indexing rewards they receive. If you’re curious about any of these, the Allocation Optimizer and AutoAgora both have blog posts on Semiotic’s website that you can read. We also have a yellow paper on arXiv about AutoAgora, and we’re currently working towards a more technical paper for the Allocation Optimizer that hopefully you can check out in the near future.
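The real Allocation Optimizer handles protocol details such as gas costs and allocation limits, but the flavor of the optimization can be shown with a stylized diminishing-returns reward curve. The formula and all numbers below are illustrative assumptions, not the protocol’s exact reward function:

```python
import math

def optimal_allocation(signal, existing, stake, iters=100):
    """Split `stake` across subgraphs to maximize
    sum_i signal[i] * x[i] / (existing[i] + x[i]),
    a stylized diminishing-returns reward curve (water-filling solution)."""
    def alloc(lam):
        # KKT condition: x_i = sqrt(signal_i * existing_i / lam) - existing_i, clipped at 0.
        return [max(0.0, math.sqrt(s * e / lam) - e)
                for s, e in zip(signal, existing)]
    lo, hi = 1e-12, 1e12  # bracket the Lagrange multiplier
    for _ in range(iters):
        mid = math.sqrt(lo * hi)  # geometric bisection
        if sum(alloc(mid)) > stake:
            lo = mid  # allocated too much -> raise the "water level"
        else:
            hi = mid
    return alloc(math.sqrt(lo * hi))

# Hypothetical numbers: three subgraphs, 100 units of stake to spread.
x = optimal_allocation(signal=[50.0, 30.0, 20.0],
                       existing=[10.0, 10.0, 10.0],
                       stake=100.0)
print([round(v, 1) for v in x])  # → [44.0, 31.8, 24.2]
```

Note how the higher-signal subgraph attracts more stake, but not all of it: the diminishing returns of piling onto one subgraph push some stake toward the others.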
So a lot of resources for any listeners that want to learn more about how Semiotic Labs is contributing to The Graph and using artificial intelligence to improve the protocol. But I’m curious about how it’s going to be used in the future. What are some of the exciting ways in which artificial intelligence might be incorporated more into The Graph?
Tomasz Kornuta (00:45:39):
Yeah, that’s a great question. So the solutions that I mentioned are in production. What I will be talking about right now are ideas that we are spinning up and projects that we are starting. Of course, looking at what LLMs are doing, we are thinking about some kind of natural language interface to The Graph. So what kinds of, let’s say, applications of LLMs would be useful for people using The Graph, developers and so on? Let me just mention briefly three directions that we are actively discussing and brainstorming right now. One of them: imagine a situation in which the customer expresses a query in the form of natural language, and the model just generates the GraphQL query that is sent to the Indexer. Call it the text-to-GraphQL problem. Similar problems are already being solved in the web2 community, text-to-SQL, let’s say. The second direction that we are thinking about: imagine a situation where the customer has a GraphQL query that has an error, and he or she asks the model to correct it, so a GraphQL query correction problem.
There are a few startups and a few solutions already deployed in other domains where people are using language models, using LLMs, to correct code and to detect errors. Actually, you can even use ChatGPT to do that to some level. The third direction, the third idea that we are discussing, is that the customer actually has a query and wants to optimize, let’s say, its execution time. We have discussed a few times in the past, together with the Foundation, how different types of developers are using The Graph. And one of the broad points was that they could optimize their queries, but probably they’re not super prolific GraphQL specialists. So I really think that this GraphQL query optimization problem is an area that we would like to really focus on. It is one possible application that could have a huge impact and bring down costs; it’ll just be better for the whole system.
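As Tomasz says, these are directions under discussion, not shipped features. A first sketch of the text-to-GraphQL idea could be as simple as wrapping the subgraph schema and the user’s question into an LLM prompt. The `call_llm` and `send_to_indexer` functions and the schema fragment below are hypothetical placeholders, not a real integration:

```python
def build_text_to_graphql_prompt(schema: str, question: str) -> str:
    """Assemble a prompt asking an LLM to translate a natural-language
    question into a GraphQL query against the given subgraph schema."""
    return (
        "You are a GraphQL assistant. Given this subgraph schema:\n"
        f"{schema}\n"
        "Write a single GraphQL query answering the question below.\n"
        "Return only the query, no explanation.\n"
        f"Question: {question}\n"
    )

# Hypothetical schema fragment and question:
schema = "type Token { id: ID! symbol: String! tradeVolume: BigDecimal! }"
prompt = build_text_to_graphql_prompt(
    schema, "What are the 5 tokens with the highest trade volume?")

# In a real system the prompt would go to a model and onward to an Indexer:
#   query = call_llm(prompt)          # call_llm is a placeholder
#   result = send_to_indexer(query)   # so is send_to_indexer
print(prompt)
```

The query correction and query optimization directions could reuse the same shape, with the broken or slow query pasted in place of the question.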
Sam, double-clicking on what Ani said, I want to further expand this. It sounds to me like artificial intelligence can be used within the protocol, optimizing things like the way that query fees are paid to Indexers, but there’s also this idea of how it will integrate or be used outside of the protocol. Can you expand on that a little bit? I think that’s really interesting.
Sam Green (00:48:05):
Sure. One of the things that’s really interesting about the data that The Graph provides is that it is focused on being very accurate. You heard me complaining earlier about how the information on the internet is wrong a lot of the time. In The Graph, if the information is wrong, then the Indexer is going to get slashed. So all the data that ends up in The Graph needs to be accurate. And this accurate data is very valuable data, because it lets you do decision making more efficiently if you know that the inputs to your decision-making algorithm or process are going to be accurate, of course. So if we focus just on the AI conversation, we’ve been brainstorming. Everything that you’re hearing about LLMs today is still in the brainstorming phase. We’ve been brainstorming about using the data in The Graph in two ways. One way we can use this data that’s trustworthy is by… I’m going to get a little technical here about how LLMs work. This data could be used to condition a search on an LLM. So for example, here’s something that you could try today.
If you were to go to ChatGPT and copy in an article that you believe to be completely accurate, and then say to ChatGPT, “I would like to ask you a question, and only provide me an answer if the information is in the article I gave you. If you do not know the information, tell me you don’t know; do not make up anything.” You’ll see that it will play this game with you. It can then accept natural language; you can ask it very flexible questions about the data that you’ve copied and given it, I’ll say, context with, and it’s not going to make a… I mean, I don’t want to say it’s not going to. I’ve experimented with this, and when I’ve done this, it has not made up information. It’s only pulled information that it had.
Similarly, what if we worked with a project like Geo? Geo’s focus is on getting very accurate information into The Graph, and some of this information won’t be just blockchain information. What if, for example, we used Geo to collect white papers from every web3 project and got that data into The Graph? Then someone could go to The Graph and send us a natural language query, and from that natural language query we could detect, oh, this is about some other protocol. We could then pull the information about that protocol, give context to the large language model, and then only get information related to the context that we’ve given it, so that we can minimize the chances of getting wrong data. So this is one way that I’ve been thinking about how we can use the trustworthy data that we are collecting in the protocol. The other way that I’ve been thinking about is potentially using it for training neural networks. Right now, these models go out and scrape the entire internet for data.
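The flow Sam describes, detect the project, pull its trusted document, and fence the model inside that context, could be sketched like this. The document store and the keyword matching are hypothetical stand-ins; a real system would likely use embedding-based retrieval over data indexed in The Graph:

```python
# Toy "trusted store": project name -> vetted text (contents are made up).
docs = {
    "uniswap": "Uniswap is an automated market maker protocol on Ethereum.",
    "livepeer": "Livepeer is a decentralized video transcoding network.",
}

def retrieve(question: str) -> str:
    """Pick the trusted document whose project name appears in the question."""
    q = question.lower()
    for name, text in docs.items():
        if name in q:
            return text
    return ""

def build_grounded_prompt(question: str) -> str:
    """Fence the model inside the retrieved context, as in Sam's experiment."""
    context = retrieve(question)
    return (
        f"Answer using ONLY this article:\n{context}\n"
        "If the answer is not in the article, say you don't know.\n"
        f"Question: {question}\n"
    )

prompt = build_grounded_prompt("What does Livepeer do?")
print("transcoding" in prompt)  # → True: the Livepeer document was retrieved
```

The point of the design is that the model’s fluency is combined with data whose accuracy is enforced by the protocol, rather than whatever the model absorbed from the open internet.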
This data is free, but it’s unreliable. And in fact, even if companies wanted to pay for the data that they’re scraping, there’s no convenient way to pay for the data during the scraping process. By default, it’s possible to pay in GRT for the data that’s read from The Graph. So what if we provided a way where we had trustworthy data that could be read, and a convenient way to pay for that data?
Well, obviously there’s a lot of exciting stuff to look forward to when it comes to The Graph and artificial intelligence. I think one thing that throws people off is ChatGPT. It sort of just showed up on the scene, and I imagine a lot of people think, well, that must have been very easy to build, very easy to deploy, and now we’re all just playing with it all day. I mean, a lot of work, a lot of investment and resources goes into stuff like this. Is that right?
Sam Green (00:52:04):
Yeah, that’s right. First of all, as we know now from what Tomasz was saying, we’re about 80 years into the era of AI, and only after 80 years have we gotten to this point. But then let’s focus specifically on OpenAI. What you may not know is that last year, OpenAI hired 1,000 contractors to generate clean data for its system. So I didn’t tell the whole story when I said that it was just trained on random data scraped from the internet. No. OpenAI has basically built a moat of very clean data that they’ve gotten by hiring these thousand people to come in and make sure things are very clean. For example, 600 of those thousand contractors are paid to write clean code. This code isn’t being used to run ChatGPT. No.
It’s being used to train ChatGPT on how to generate clean code. So this is a moat that they’re paying a lot of money to dig, and you haven’t heard about this because, of course, OpenAI wants ChatGPT to seem like magic. They want you to show up and just use it. Why would they go into all the devil in the details of how it’s built? So of course it makes sense; I’m not trying to criticize them for that. But I just want to set expectations and let people know that there are a lot of new opportunities ahead of us, but many of those opportunities are going to be challenging. And right now, what we’re trying to do is understand what’s going to have the biggest impact in The Graph, and we’re balancing that with how much fresh data we need to generate to get to these new capabilities.
Tomasz Kornuta (00:53:47):
This is an excellent point. Data curation is a very important process in machine learning. Aside from that, those transformer-based models such as GPT are very data-hungry. To bring some numbers that maybe will enable our listeners to build intuition: the different GPT variants range widely in the number of trainable parameters, from 1 billion to 175 billion, and those are huge numbers. When I was leaving NVIDIA, my colleagues from ADLR had trained the Megatron-Turing model, which at that point captured big attention from the media. It was a monolithic transformer English language model with 530 billion parameters. This model has 105 layers, and you needed a supercomputer to train it. It was trained on NVIDIA’s DGX SuperPOD-based Selene supercomputer. The cost of training was tens of millions of dollars. Luckily it was NVIDIA, so we were training it on our own hardware and our own supercomputer, so those were, let’s say, NVIDIA green dollars, not regular dollars. But still, the costs are just mind-blowing.
It’s worth remembering there’s this price: we need data, and the crypto industry is just discovering AI. Machine learning research in general is based on benchmarks, on datasets, on well-established metrics, well-formulated problems and so on, and we are lacking all of that. We want to take the best advances in AI and try to apply them here. It’s going to be a very interesting exercise. I’m excited, but I know that it’s not like we will just apply ChatGPT and it just works. No. We’ll need some applied research and work.
Sam, I want to ask you this question about the future of artificial intelligence from the perspective of crypto. Do you think crypto can reach its full potential, mass adoption, people using it in their everyday lives, without something like artificial intelligence?
Sam Green (00:55:58):
That’s another tricky question. So I view AI as a tool to increase the efficiency of my labor. For example, you guys have all seen the AI-generated art. Each one of us could paint pictures like those AI generators are outputting, but how long would it take? It would probably take me a week or two, maybe a month, to get to what I can generate in 30 seconds. I can imagine anything, and I can see it in 30 seconds. So this is the power of AI, and what we’re going to see is that same power brought to crypto. We’re already trying to do this today with the tools that we’ve released for the participants in The Graph, with the Allocation Optimizer and AutoAgora, and this is what we are going to do with future tools that are going to be built on LLMs. So can crypto get to the ultimate place without AI? Sure. How long would it take though? Maybe 100 times longer, 100 years or 1,000 years, I don’t know.
But certainly AI is going to accelerate things in crypto, and what I think is going to be the biggest accelerant is whenever we can get to the point, let’s focus on The Graph, where we can release a tool where anybody could come to us and say, an example I’ve been using is, “I would like to see the ETH to USDC ratio over the past 24 hours.” We would like to have a tool that could accept that query phrased in any way: what’s the ETH to USDC ratio, or what’s the price of ETH right now? I want to be able to let anybody use The Graph. So really what I’m trying to say is I’m looking for an “oh my gosh” sort of capability that we can bring to users. I want to blow people’s minds with the tools that we provide, and that’s going to accelerate adoption.
Ani, I think it would be a lot of fun to get your vision for the future of artificial intelligence. We’ve already heard from crypto Twitter, and we’ve heard from mainstream media with things like ChatGPT. You work in the field every day; I’d love to know what makes you excited about the future of artificial intelligence.
Anirudh Patel (00:58:11):
There are two sides of it for me. There’s the practical side and then the researcher side of me. The practical side of me sees AI as a tool. I would love to see all the creative ways people are going to use this tool to improve the world around us, improve other people’s lives. The researcher part of me is just interested in the math, really. There are some real problems in multi-agent reinforcement learning, which is my field, and a lot of them at this point in time seem almost intractable. It almost seems like we don’t even know how to solve them. But I have confidence in us. I think there are a lot of people way smarter than me working on the problem, and together I’m sure we’re going to figure something out. So I’m interested, five or 10 years from now, to look back at problems that we thought we would never overcome and see how we got to where we are.
Tomasz Kornuta (00:58:53):
Following up on some of Ani’s thoughts. I really am excited by seeing ChatGPT, but I think that some people are trying to use it as a replacement for some existing technologies, let’s say the search engine. It simply works in a different way, and we need to learn how to use it and for what purposes those tools are good, similarly to how we learned to use search engines like Google in the past. You have to figure out how to use keywords, how to polish them, how to combine them and so on. I think one great example of how people are learning to use those new tools coming from generative AI is presented in an article from Wired, where the author describes how text-to-image generators are changing the way he works, and that graphic designers actually became, I love this term, co-creators.
Spending countless hours playing with various prompts and so on, looking for the perfect picture. What you’re observing is that the painter right now became like a prompt engineer. What is fascinating for me is that there’s this huge field that emerged within the last two, three years, which is called prompt learning, or prompt engineering. I checked recently; it’s quite a well-established topic. As of February 23rd, “prompt engineering” returns more than 1,400 results and “prompt learning” more than 2,100 results, which means there are almost 4,000 papers that focus on those problems. Can you imagine how that exploded? Just prompting alone. So this is really where research is hot right now, in those areas where people are learning how to use those tools. I’m super excited. I cannot imagine what will come up next.
How about you, Sam? What makes you excited about the future of artificial intelligence?
Sam Green (01:00:44):
I’m going to give a personal answer and a professional perspective. From a personal perspective, I’m studying Spanish. I’ve been studying Spanish for about a year now, and I think a lot of listeners know that ChatGPT, when you use it, is multilingual, and you can actually put it into a different language context. Basically, I have a Spanish teacher, but I’m starting to use ChatGPT as my second Spanish teacher, and it can talk to me at any level I want. It can give me example sentences. If I say, “Hey, I need you to dumb it down,” it’ll do that. It’ll dumb it down for me and make it easier. So I’m just excited for these emerging tools so I can use them. You guys have all been hearing me; I’m very excited about the amplifying powers of these tools, and I want to adopt them in my day-to-day life.
And then of course, and I know I speak for the team here, we’re all excited about figuring out what, with these emerging capabilities, specifically with LLMs, we can do in The Graph. I think it’s going to be a highlight of our careers once we start deploying tools based on these techniques in The Graph. It’s going to be very exciting for us.
Ani, for listeners that want to learn more about artificial intelligence, and you mentioned a few resources already available on the Semiotic Labs website, what are some other things that listeners can read to further familiarize themselves with this important topic?
Anirudh Patel (01:02:10):
Yeah, so I think the best way, honestly, is to just have conversations with someone in the field. Even me, being a researcher in the field, dedicating 40-plus hours of my week, 40 hours minimum let’s say, to the subject, I’m always behind on something. This is why we have other experts in the room, people that can bring perspectives and expertise that I don’t have. I don’t think it’s possible for one person to keep track of it all. Outside of that, I would just recommend sources like MIT News, generally sources that aren’t going to sensationalize AI news. Not every new algorithm is a Terminator, and not every new algorithm is going to cure cancer. Most algorithms are just small incremental improvements on what we already have.
Well, Ani, Tomasz, Sam, thank you so much for joining me today and shining a bright light on a topic that’s captured so much interest, not only online and in mainstream media, but certainly within The Graph ecosystem. And a huge shout-out to the team at Semiotic Labs for all the contributions when it comes to artificial intelligence and optimizing the protocol. I’m very excited to see what this team does in the future. Tomasz, for people that want to stay in touch with Semiotic Labs and learn more about the things you’re working on, what’s the best way to stay in touch?
Tomasz Kornuta (01:03:21):
We’ve got several blog posts presenting our work. As Ani mentioned, our website is one of the good ways of catching up on what we are doing; we’re trying to keep it up to date. We also gave several talks here and there. We gave a talk during Devcon in Bogotá. Sam and Ani organized a workshop called Incentive Mechanism Validation during Devconnect in Amsterdam. We’ve got recordings from those; they’re on our website. Of course, we’ve got some Twitter handles that we’ll share with you. And if you’re really interested and would like to work on AI-related topics, just shoot us an email at [email protected].
DISCLOSURE: GRTIQ is not affiliated, associated, authorized, endorsed by, or in any other way connected with The Graph, or any of its subsidiaries or affiliates. This material has been prepared for information purposes only, and it is not intended to provide, and should not be relied upon for, tax, legal, financial, or investment advice. The content for this material is developed from sources believed to be providing accurate information. The Graph token holders should do their own research regarding individual Indexers and the risks, including objectives, charges, and expenses, associated with the purchase of GRT or the delegation of GRT.