Zac Burns · Software Engineer · Edge & Node · The Graph · GRT

GRTiQ Podcast: 09 Zac Burns


Episode 09: Today I’m speaking with Zac Burns, a Software Engineer with Edge & Node working on The Graph. Our conversation covers a variety of interesting topics, including a detailed discussion of the recent Scalar announcements at The Graph, how Delegators actually secure the network, the most common misunderstandings he sees discussed in The Graph community, and the role of a Fisherman in The Graph.

The GRTiQ Podcast owns the copyright in and to all content, including transcripts and images, of the GRTiQ Podcast, with all rights reserved, as well as our right of publicity. You are free to share and/or reference the information contained herein, including show transcripts (500-word maximum) in any media articles, personal websites, in other non-commercial articles or blog posts, or on a non-commercial personal social media account, so long as you include proper attribution (i.e., “The GRTiQ Podcast”) and link back to the appropriate URL (i.e., GRTiQ.com/podcast[episode]). We do not authorize anyone to copy any portion of the podcast content or to use the GRTiQ or GRTiQ Podcast name, image, or likeness for any commercial purpose or use, including without limitation inclusion in any books, e-books or audiobooks, book summaries or synopses, or on any commercial websites or social media sites that either offer or promote your products or services, or anyone else’s products or services. The content of GRTiQ Podcasts is for informational purposes only and does not constitute tax, legal, or investment advice.

SHOW NOTES:

  • Fisherman (Article)
  • Smart Contract (Link)
  • Scalar (Blog)
  • Decentralized App – dApp (Link)
  • Edge & Node (Website)
  • Yaniv Tal (Twitter)
  • Curators (Blog)
  • GraphQL (Link)
  • Migration (Blog)
  • The Graph Protocol – Roles (Blog)
  • Community Dashboard: Graphscan.io (Link)
  • How to Select an Indexer (Doc)
  • Delegators Secure the Network (Video)
  • Ethereum EIP 1559 (Article)
  • Ethereum Gas Fees (Link)
  • Define a Subgraph (Blog)

SHOW TRANSCRIPTS

We use software and some light editing to transcribe podcast episodes. Any errors, typos, or other mistakes in the show transcripts are the responsibility of GRTiQ Podcast and not our guest(s). We review and update show notes regularly, and we appreciate suggested edits (email: iQ at GRTiQ dot COM).

00:22
So, when I first joined The Graph, I honestly had no idea what was going on. I had never used Ethereum. I didn’t know what a smart contract or a dApp was. But I could tell one thing, and that was that either these guys were crazy, or they were onto something that was going to be really big.

01:11
Welcome to the GRTiQ Podcast. Today I’m speaking with Zac Burns, a software engineer at Edge & Node. Our conversation covers a variety of interesting topics, including a detailed discussion of the recent Scalar announcement, how Delegators actually secure the network, the most common misunderstandings Zac hears when speaking with members of The Graph community, and a new topic not yet covered in any previous podcast: the role of Fishermen in The Graph. I started the conversation with Zac by asking how he first came to be involved with The Graph.

01:49
When I first joined The Graph, I honestly had no idea what was going on. I had never used Ethereum, and I didn’t know what a smart contract or a dApp was. But I could tell one thing, and that was that either these guys were crazy, or they were onto something that was going to be really big. And I figured I’d be able to tell within the first six months, and if they were crazy, I’d just jump ship. So that’s probably not the best state of mind getting into this. But when you see that there’s a once-in-a-lifetime opportunity, well, it only comes up once in a lifetime, by definition. So, if you’re not able to take a small risk and find out what it is, then that opportunity might pass you by. Fortunately, during the interview, I got to share with Yaniv my philosophy about how the story of human progress is one of increased efficiency over time. You can track the well-being of civilizations with questions like: how much effort does it take to have light in the evening for one hour? Once upon a time, we chopped wood for heat and light, and that took a lot of effort. Things moved on to candles, and eventually light bulbs, and cheaper and more efficient light bulbs with more efficient sources of power. And these innovations have real impact on real people. I wanted to be working on those pivotal moments in history that bring light to civilization. So thankfully, Yaniv understood that we had an alignment, one that I thought might be there but didn’t really fully grasp yet. So, if you asked me now what brought me to The Graph, knowing what I do now, I would say that a big part of having an efficient civilization is creating incentivized systems that coordinate independent actors to engage in pro-social behaviors and bring value to their community. And that, to me, is what The Graph is all about. It’s about creating tokenomics that bring people together to solve problems, reward the value that those people contribute, and keep the bad actors from taking advantage of that system. It’s basic when you boil it down like that, but it’s also very exciting.

04:11
So, what was it about the project that made you think either everybody’s crazy, or this could be something big?

04:17
The whole idea of blockchain, and of what The Graph does, sort of presupposes that you understand blockchain and that problem space. So, it’s a lot for someone to figure out in a short amount of time whether or not they’re really solving a real problem. It turns out that they are. In fact, one of the first things that I noticed when joining the company that really validated this decision is that people were knocking down the door trying to get in and deploy subgraphs, because everyone in blockchain actually has this problem. But until you understand blockchain, a lot of that can seem kind of foreign.

05:03
Can you describe what it is you do at Edge & Node?

05:06
My official title at Edge & Node is software engineer. At Edge & Node, all the software engineers wear a variety of hats, though. If I were to try and choose one area that I’ve focused on, what makes me different from other engineers at Edge & Node, I would say that my focus has been on decentralizing the network. There are a lot of aspects to that. Decentralization requires security, so I’ve developed the Proof of Indexing, a shared artifact that makes sure that all Indexers are indexing all of the data in the same way. Also for security, I developed the Fisherman service, which verifies that queries are being served correctly by Indexers and, if not, raises a dispute on chain. Decentralization also requires there to be query fees, so I created Agora, which is a programming language that allows Indexers to express the price of a query. Also for query fees, I developed Scalar, which enables high-throughput microtransactions at the scale that’s necessary to support the queries in the network. Decentralization requires choice, so I implemented our Indexer selection algorithm, which collects statistics about Indexers. When a query is received from a consumer, it looks at the consumer’s preferences, whether those be for speed or economic security or price or data freshness or whatever, and matches that query to the best Indexer for the consumer. To help provide the network participants with the best choices, I’m also working with Semiotic AI, who received a grant from The Graph Foundation to optimize interactions between many independently interacting agents. I’ve only been with this team for a little over a year, and those are just some of the highlights. So, you can see that Edge & Node keeps software developers pretty busy. In fact, I’d say that there are so many different opportunities surrounding the network that I find a part of my job is turning down lots of those opportunities, just to make sure that we’re focusing on what’s going to be the highest-value opportunity right now.
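
To make the Proof of Indexing idea concrete, here is a minimal sketch with illustrative names and a plain hash; it is not the actual graph-node implementation, which binds in more context, but it shows why two honest Indexers end up with the same artifact and why divergence is easy to detect.

```typescript
// Minimal sketch (not the real graph-node code): every Indexer deterministically
// digests the entity changes for a subgraph, block by block, so two honest
// Indexers produce the same artifact and any divergence shows up as a mismatch.
import { createHash } from "crypto";

type EntityChange = { entity: string; id: string; data: string };

// Fold each block's entity changes into a running digest. Names are illustrative;
// the real Proof of Indexing construction has additional inputs.
function updatePoi(previousPoi: string, blockChanges: EntityChange[]): string {
  const hash = createHash("sha256");
  hash.update(previousPoi);
  for (const change of blockChanges) {
    hash.update(`${change.entity}:${change.id}:${change.data}`);
  }
  return hash.digest("hex");
}

// Two Indexers that processed the same blocks in the same way agree on the PoI.
function poisAgree(poiA: string, poiB: string): boolean {
  return poiA === poiB;
}
```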

07:19
It’s remarkable, all the different things you’ve been involved with. I want to talk a little about Scalar, which made a lot of news when it was announced. I’m curious if we can approach that question in two ways, a technical way and a non-technical way. So, from a non-technical perspective, what can you tell us about what made Scalar so important?

07:38
Scalar makes the network work. If you want to have paid queries, there’s nothing else that will actually get it to work without either waiting a really long time to get your response back, or it being insecure, or it being prohibitively expensive, or other problems like that. And if you have those problems, no one wants to use The Graph.

08:01
Alright, that makes a lot of sense. So, can you answer this question in a more detailed or technical way, what made Scalar so important for The Graph?

08:11
Scalar was a collaboration between The Graph Foundation, Edge & Node, and Connext. What Scalar enables is routing query fees via microtransactions from consumers to Indexers with low latency, which means it happens very fast, and at scale, which means that you can have a lot of them. To describe how this solution benefits the network, I need to get a little bit technical, unfortunately, because this is a technical solution to a technical problem. But I’ll try to tie the technical analysis to benefits that accrue to each participant in the network. So first, the problem that Scalar intends to solve: part of the network’s security model is that microtransactions carry query fees from consumers to Indexers. Doing this on a layer-one chain like Ethereum is a non-starter. Doing anything on Ethereum costs gas, which is pretty expensive right now compared to history. So, using that for a microtransaction is going to be very inefficient, which is kind of an understatement, actually. Another problem is that Ethereum takes about 13 seconds to mine a block, and that’s the only confirmation an Indexer would get that a transaction has taken place. So, if the microtransaction in an Ethereum block is a dependency of serving a web page with some data that you got from an Indexer, that 13 seconds is an eternity to wait to see your data. No one’s going to stay on a webpage for that long. The traditional way to solve these problems is with something called state channels. Basic state channels let two parties make many microtransactions between themselves and then settle the final balance on chain periodically. That sounds kind of perfect for the problem that we have, but it’s not quite, at least not in the forms that had been explored before Scalar. In fact, it took almost two years of research with some of the world’s leading experts to reach a state channel design that could accommodate the needs of The Graph. The scale of The Graph tends to take technologies and push them to their limits, and in some cases past their breaking point, as was the case with state channels. The basic idea behind state channels is this: I hand you a signed message that says, if you serve me a query, you can have a small fraction of a GRT. You then serve the query, take that message, sign it, and hand it back to me. And this process is repeated; you pass the message back and forth until you’re done interacting, and you know that at the end you will get your fractions of a GRT, because that signed message is backed on chain by, say, 10 GRT, which would account for the many queries that have taken place over a period of time. So here is the first problem that exists for our network when taking that basic state channel design and applying it. Let’s say that a gateway has a query; it produces a signed message and hands it off to the Indexer. Now the same gateway receives another query that wants to sign off on a new microtransaction. But the Indexer is now holding on to that signed message; it’s their turn to update it and send it back. So, you need another signed message. This is bad, because creating and maintaining a signed message that’s backed on chain with real GRT is really expensive. In this context, there’s a setup process involved, which involves communication between both parties. And while you’re setting one up and collateralizing the message with GRT, the user is all this time just waiting for their page to load.
Furthermore, the gateway has to back up that new message with some GRT, even though they already have another message that the Indexer is holding on to that has GRT backing it; now they just need another one with the traditional state channel design. This is really wasteful for the gateway from a capital utilization standpoint, to have their liquidity partitioned into many different signed messages being held by different participants, like different Indexers. That waste, and the liquidity that’s just being locked up in these messages, has a real cost that can be calculated and would then have to be passed on to the consumer. Right? So, state channels, if you just take the basic design and apply it, are going to be slow and expensive. And it actually gets worse from there. I’m not going to try to get into too many details, but there are questions like, you know, what happens if you send the Indexer a message and it just doesn’t come back? Right? Now what? It is a distributed system, which is going to be susceptible to all kinds of networking and system failures. So, if they don’t send one back to you, you need to make another message, and figuring out what happened there has overhead of its own. And do you lose the 10 GRT that’s backing the message? When you’re talking about hundreds or thousands of participants interacting at scale, this breaks down. So, the way that Scalar approaches the problem is actually very simple. I’m going to oversimplify a few things, but the basic idea is that when you set up that first signed message, instead of having one signed message, you’re basically getting an infinite number of state channels that are held in a pool, that all share the same collateral on chain; they’re all backed by the same 10 GRT that they can draw from. And that has a lot of second-order effects for the efficiency of the system. So that problem where I had sent you a signed message, and the Indexer was holding on to it, and I get a new query: now what I can do is just draw from my near-infinite supply of messages and send you that one. And then these two signed messages can be updated in parallel without any additional setup costs. Furthermore, maybe the Indexer drops a message, or you’re not really sure what state it’s in, because it’s a distributed system, and maybe the gateway sent it over and the Indexer didn’t find it, or maybe the Indexer sends it back and it’s just lost. Because these signed messages don’t have any collateral tied to them specifically, but all share the same collateral, they’re disposable. You can just drop one and grab a new one whenever you want. And whatever happened there about who dropped the ball, you don’t need to figure it out. What happens is just that the latest state ends up being resolved into the final state.
So the Indexer will collect the latest version of all the signed messages that they have and collect all the query fees that way, so the question of who dropped what just goes away. There are a lot more benefits that come from how Scalar interacts with Vector, in that when we have a lot of participants in the network, a lot of these interactions will happen through a router. So an Indexer, even if they have collected query fees from many different participants, which are backed by different amounts of collateral on chain, will actually be able to submit only one final message to the chain that withdraws all of those query fees they’ve accumulated from many different participants in just one single transaction. And that’s very efficient, which is important for saving those gas costs for the Indexer, so we don’t have to pass those gas costs on to the user. There are a lot more benefits and a lot more technical analysis; I encourage you to read the blog post, which will be in the show notes, if you have the time.
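
The sketch below is a toy model of the idea described above, not Scalar’s or Vector’s actual design: one on-chain collateral pool backs many disposable receipts, so the gateway can sign new receipts in parallel with no extra setup, and the Indexer simply redeems the latest totals in one settlement.

```typescript
// Toy model only: shared collateral, disposable receipts, single settlement.
interface Receipt {
  id: string;        // identifies which "virtual channel" this receipt belongs to
  totalFees: number; // cumulative GRT owed on this receipt
  signature: string; // placeholder for the gateway's signature over this state
}

class CollateralPool {
  constructor(public collateral: number) {} // e.g. 10 GRT locked on chain
  private receipts = new Map<string, Receipt>();

  // Draw or update a receipt at any time; no extra on-chain setup is needed
  // because all receipts share the same collateral.
  pay(receiptId: string, queryFee: number): Receipt {
    const prev = this.receipts.get(receiptId)?.totalFees ?? 0;
    const receipt = { id: receiptId, totalFees: prev + queryFee, signature: "sig(...)" };
    this.receipts.set(receiptId, receipt);
    return receipt;
  }

  // The Indexer redeems the latest state of every receipt in one settlement,
  // capped by the shared collateral, so lost intermediate messages don't matter.
  settle(latestReceipts: Receipt[]): number {
    const owed = latestReceipts.reduce((sum, r) => sum + r.totalFees, 0);
    return Math.min(owed, this.collateral);
  }
}
```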

16:35
So, before we move to the next topic, maybe we can quickly summarize what the benefits of Scalar are, in more simple terms?

16:42
The essence of what we did with Scalar is that if you want the network to be secure, you have to have paid queries. And if you want to have paid queries that are fast and not prohibitively expensive, then you need something like Scalar, or the network just doesn’t work. Because if it costs $10 in Ethereum gas fees to serve a query for a webpage, no one is going to be visiting that webpage. Or if you can’t serve queries securely, where the Indexer is guaranteed payment, then that Indexer isn’t going to serve queries, because it would end up losing money; there would be no incentive later for the consumer to pay them if they don’t have to. Right? And you also need web pages to be served very quickly, or The Graph is irrelevant in the context of the web. So, you really need all of these things, and Scalar is what enables that.

17:46
You bring up gas fees, which is always a going concern in the Delegator community, and I’m sure the larger ecosystem. I’m curious how you think about gas fees.

17:57
The gas price in Ethereum has risen a great deal over just the last year. This is explainable with basic economics: there’s a limited supply of something, in this case a limited supply of the number and complexity of transactions that can fit into a single block in Ethereum. But there’s also, at the same time, an increase in demand for those transactions in a block, driven by an increased interest in decentralized applications. Increasing prices are the mechanism by which a free market ensures that the limited supply of transactions is utilized efficiently by the users who need them the most. In some sense, this is a good thing, right? The price increase is a strong signal from the market of the demand for Ethereum transactions. It’s a measurable, observable effect of the adoption of decentralized applications, and this is good. But on the other hand, the higher prices drive out many legitimate uses for blockchain. High prices affect the network because the long tail of smaller Delegators are often among the casualties, kind of driven out by people who are making other kinds of transactions. So, there is an EIP in Ethereum called EIP-1559, and that’s supposed to help with this problem. I think that it will help a little, and that’s going to come fairly soon; I think it’s in the next hard fork, which is called London. But it’s really a band-aid on the actual underlying problem. The real problem is that there’s not enough supply of Ethereum transactions to meet demand, and the only way to solve gas fees and enable the widest variety of decentralized applications is to scale out the supply of transactions. There are many kinds of scaling solutions, and lots of people are working on this. Ethereum 2.0 is one; LazyLedger is something that I learned about recently, and it’s also very interesting. So, help is on the way for gas prices, and I hope that we can increase the supply of transactions too.

20:18
GRTiQ Podcast is made possible by a generous grant from The Graph Foundation. The Graph Grants Program provides support for protocol infrastructure, tooling, dApps, subgraphs, and community building efforts. Learn more at TheGraph.Foundation. That’s TheGraph.Foundation.

21:45
I’d like to ask you a question I’ve always been curious about. It’s this idea of how Delegators secure the network. What is meant by that? How do Delegators secure the network?

21:55
None of my opinions are necessarily reflective of Edge & Node; I’m just one member of Edge & Node, and there can be many correct answers to this question. So, when a gateway receives a query, it needs to select an Indexer to serve the query on behalf of the end user. One of the factors that our gateway considers is the amount of delegation that an Indexer has. When you delegate to an Indexer, you send an important signal to the network, and to any gateway that’s operated by the community, that this Indexer is worthy of receiving a higher query volume. That signal is essential because a Delegator can consider many factors that are not necessarily visible on chain when deciding whom to put their capital behind. For example, a Delegator may see that an Indexer created a great tool for the community. This observation informs the Delegator that the Indexer has the technical capability to index well, and the dedication to the network required to provide an excellent service on a long time horizon. But none of that is visible on chain; that observation was made by the Delegator. Or maybe a Delegator values diversity, and they vote with their delegation for Indexers who come from a diverse set of backgrounds, or whatever is important to the Delegator. They may see, for some reason, that an Indexer is motivated to serve correct responses, and that would add to the security. What’s important here is that it’s the wisdom of the crowds that’s going to shape the network into what they want it to be. We can already see evidence, on chain and off, that Indexers are responding to this market signal, because they want to get that delegation. Their response raises the bar of what it means to be a great Indexer in the network, not just for security, but across all metrics that the community cares about.

24:01
I appreciate that answer. What advice do you have for Delegators when it comes to selecting an Indexer?

24:08
I can tell you how I approach delegation personally, but I think it’s important that each Delegator come at the problem from a diverse set of angles. First, I have a minimum bar for delegation: any Indexer that I delegate to must have both submitted a Proof of Indexing and collected at least one GRT from query fees. It’s not much, but there’s an enormous amount of work going from zero to one: setting up the infrastructure, learning the protocol, staking on chain, fully indexing a subgraph, and finally, you know, serving that first query. It’s a whole lot less work to go from 1 to 100 GRT than it is to go from zero to one. That’s my minimum bar. Up from there, what I’ve tried to do is pick a different reason for selecting each Indexer that I’ve delegated to. For one Indexer, it was performing well in the testnet. For another, it was that they built a useful tool. When subgraphs started migrating over to mainnet, I paid attention to see who responded to that in the first 24 hours. I’ve looked at the Indexer profiles in The Graph portal and picked one that had a profile that resonated with me and my beliefs. Once, I looked at who was collecting the most query fees, because I know how the Indexer selection algorithm works; I know what it means to be selected. And that’s a difficult thing to do, to rise to the top, to be good across the many metrics that Indexer selection bases its decision on. So, I know that if they had the most query fees, they must be doing multiple things right. There was one case where I delegated because someone was asking questions about developing subgraphs on the Discord. At first, I just thought that this person was hopeless; we were going back and forth with these questions and didn’t seem to be getting anywhere. But I was wrong. I saw over the course of a few months that this individual kept at it, and they grew until eventually they became a very competent Indexer. Seeing that transformation left an impression on me, and I’m interested to see what this person does in the future, so I decided to delegate to them. Right? So, the answer seems kind of all over the place. But I try to look for something outstanding in every Indexer that I delegate to, knowing that each one of them is going to be bringing their own unique value to the network.

26:52
Are you concerned then that many Delegators make the selection of an Indexer based on projected or estimated APY?

27:02
That is somewhat concerning, but it’s very temporary. And I think that if people think about it a little bit more, they’ll realize that it’s not necessarily even a good strategy anyway. Because if everyone looks at Indexers’ APY and picks the one that is the highest at the moment, well, your delegation is going to get piled on, and someone else is going to pick that Indexer as well, until the APY between the different Indexers is just balanced; it goes to this equilibrium. So, by following this strategy, you didn’t even necessarily get an advantage. You just kind of balanced out the APY of the pool, right? Over time, that doesn’t work. I think that the reason people are looking at APY right now is that the network is early, and it’s honestly really hard to make good decisions about who a good Indexer is. Now that subgraphs are being migrated over to mainnet for real, that’s going to provide a lot more interesting data points, a lot more things that a Delegator can look at and research when they decide whom to delegate to.
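
Here is a rough, illustrative calculation of why APY-chasing balances itself out. The numbers are made up; the only assumption is that a Delegator’s share of an Indexer’s rewards is roughly proportional to their slice of that Indexer’s delegation pool.

```typescript
// Illustrative numbers only: piling delegation onto the "best" Indexer dilutes
// everyone's return toward the network-wide equilibrium.
function delegatorApy(annualRewardsToDelegators: number, delegationPool: number): number {
  return annualRewardsToDelegators / delegationPool; // e.g. 0.10 means 10%
}

// Before: 100,000 GRT/year shared across a 500,000 GRT pool -> 20% APY.
console.log(delegatorApy(100_000, 500_000)); // 0.2

// After another 500,000 GRT piles in chasing that APY -> 10% APY,
// roughly converging toward what other Indexers already offer.
console.log(delegatorApy(100_000, 1_000_000)); // 0.1
```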

28:20
So, with all the recent migrations, and the many more that we are expecting, would you advise Delegators who made the decision of which Indexer to work with primarily on APY to reevaluate their decision?

28:34
That depends. I’m fairly confident in the decisions that I made when delegating. But if someone has made the decision to just follow APY, I would consider reevaluating that, for sure, because there’s so much more information available to you now than when you made that decision. That’s one of the fundamental parts of making decisions: over time, new information becomes available, and you need to adjust course based on it.

That’s going to be a tough thing for a lot of Delegators, because it seems like many of the community dashboards point to APY as the primary driver of which Indexer to select. What advice do you have for Delegators in this regard?

29:21
So, I think it’s important to consider that the APY you see on a dashboard is not necessarily indicative of any actual expected returns that you would get by delegating to an Indexer. For one, it only takes into account the so-called indexing rewards, right, from submitting a Proof of Indexing. There’s nothing in that dashboard that can tell you how much an Indexer is actually going to collect in query fees. And really, the whole point of the network is query fees, so I don’t think indexing rewards are going to be the primary source of gain that a Delegator could accrue. So, if you’re just looking at that, then you’re not only not contributing to the network in the best possible way, but you’re also probably making the wrong decision financially.

30:16
As such an informed Delegator, I’d be curious to hear your opinion about whether it’s a red flag if an Indexer doesn’t close their rewards allocation within a 28-day period?

30:28
I think that would be a red flag; I would wonder why that was happening, because the Indexer should be incentivized to close the allocation after 28 days. There’s no reasonable reason, unless there are problems the Indexer is experiencing which may prevent them from closing the allocation, like if they don’t have any ETH on hand and are having difficulty having the capital to run the Indexer. That would be a red flag to me, because I wouldn’t want to vote and say, serve queries to this Indexer, which has very low capital and can’t afford to run their infrastructure. That would not be securing the network, in my opinion. So, if that happens, you know, try to find out what’s up. A lot of Indexers you can contact on our Discord; we have an Indexers and Delegators communication channel where you can find them, and you can ask them any questions. You can ask them specifically why they didn’t close that allocation and see if you’re satisfied with that answer.

31:31
So I want to ask the converse of that question, which is: is it then more favorable to work with Indexers that close their allocations frequently, maybe even daily?

31:43
Not necessarily. In fact, I think that closing allocations too often is a sign of not operating efficiently, because there is no reason to do that; it just seems like burning money to me. The reason that we allow allocations to stay open for so long is so that you don’t have to close them, so that you don’t have to spend money, in ETH, to close that allocation. So I don’t think that’s going to be very typical. What I would want to see when looking at an Indexer is that they’re closing and opening allocations in response to market conditions around subgraphs. If a new subgraph is deployed and they need some of their stake back, I would like to see them close an allocation and open two, one with that new subgraph in it and one with the previous subgraph that they had, each with an appropriate amount of stake. So, the closing and opening of allocations lets them balance where they’re allocated toward the subgraphs that are receiving queries, in response to changes that are happening in the real world. And if there’s no need to rebalance, I would like them to hold their allocation open for as long as possible.

32:59
What do you say to Delegators who worry that their rewards or GRT might get slashed if they partner with a bad Indexer?

33:07
This is a misunderstanding that I see come up quite a lot, actually, and one that I think would be useful to clarify. There are a few risks associated with being a Delegator, but being slashed is not one of those risks. As a Delegator, if an Indexer behaves maliciously, say they serve an incorrect response to a query, you will not be slashed; you will not be held accountable for that. The risks that are there are more in terms of opportunity costs. So, there is a delegation tax, where, when you delegate, I believe exactly half a percent of your GRT is burned, and then the rest goes to the delegation. And once you want to undelegate, you are subject to a 28-day thawing period. So, your main risk is that maybe you never get that half a percent back, and you need to keep your liquidity locked up in the contract for 28 days. But as far as risks go, that’s really low risk. I mean, it’s capped, right, at that half a percent. You don’t stand to lose any more GRT.
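
A quick back-of-the-envelope calculation based on the 0.5% delegation tax Zac mentions, with a made-up delegation amount, shows how bounded the downside is.

```typescript
// Illustrative arithmetic only: the delegation tax is the main cost, and it is capped.
const delegated = 1_000;             // GRT sent to the delegation contract (example amount)
const delegationTaxRate = 0.005;     // 0.5% burned on delegation
const burned = delegated * delegationTaxRate;  // 5 GRT
const working = delegated - burned;            // 995 GRT actually delegated
console.log({ burned, working });
// Undelegating then locks the 995 GRT for the 28-day thawing period,
// but no further GRT is at risk of slashing for the Delegator.
```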

34:27
I’d also like to hear your thoughts about the 28-day thawing period. You’re so informed about the network, and you’re also a Delegator. So how do you think through that issue?

34:37
This is a little bit of a difficult question. I mean, the 28-day thawing period is there for a reason, and I’ll just explain what that reason is. There is an economic attack that we call Delegator front running. Because of the way that the contracts are implemented on chain, in order to make things efficient, when an Indexer closes an allocation, we don’t look at how long a Delegator was delegated when determining the rewards at that allocation’s close; all of those Delegators are rewarded equally. So, the Delegator front running attack goes: I see that an Indexer has held an allocation open for a long time, so right before they close it, maybe after 27 days, I’m going to submit my delegation, wait for them to close it, then delegate to another Indexer, and just kind of loop around and try to collect rewards at a much more accelerated rate than would be reasonable if I were participating in the network as intended. And so, what the 28-day thawing period does is prevent a Delegator from being able to switch delegations any more often than that maximum allocation length, which means that they have every incentive to actually just secure the network instead of playing silly games. That being said, that 28-day thawing period is a response to a problem and not something that’s necessarily desirable in itself, right? It’s not making the network more efficient. You could argue that if we want to secure the network, it would be good for a Delegator to be able to respond quickly to changes that are happening. If I see that an Indexer is behaving poorly, and I want to switch to another one, maybe being able to do that quickly is actually advantageous for the security of the network. Maybe not, because if you’re seeing that that Indexer is behaving poorly, then when you undelegate, they are immediately losing that delegation. So maybe it’s not that important that the Delegator who made a poor choice be able to make another one quickly. Maybe it provides an incentive for a Delegator to actually do some research before committing that capital for an amount of time. So, I think that there are arguments on both sides. But we have Graph Improvement Proposals, and we have a forum where people can debate these topics a lot, and we take these issues very seriously. By we, I mean the whole community takes these issues very seriously and is trying to make the best network that we can, given the realities of the situation.
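
A toy illustration of the front-running attack described above, with made-up numbers: because rewards at allocation close are split by stake rather than by how long the stake was delegated, a last-minute Delegator would earn the same rate as a patient one if they could hop freely.

```typescript
// Toy numbers only: rewards are split by share of the delegation pool at close.
function rewardShare(myDelegation: number, totalPool: number, rewardsAtClose: number): number {
  return rewardsAtClose * (myDelegation / totalPool);
}

// Delegator A was in the 110,000 GRT pool for 28 days; Delegator B jumped in on day 27.
console.log(rewardShare(10_000, 110_000, 1_000)); // A earns ~90.9 GRT
console.log(rewardShare(10_000, 110_000, 1_000)); // B also earns ~90.9 GRT for one day in the pool
// The 28-day thawing period stops B's capital from hopping to the next closing
// allocation faster than the maximum allocation length, removing the exploit.
```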

37:39
Hi! This is Zac with Edge & Node, and I’m a software engineer at The Graph. If my conversation with the GRTiQ Podcast has been helpful, please consider supporting future episodes by becoming a subscriber. Visit GRTiQ.com/Podcast for more information. That’s GRTiQ.com/Podcast. Thanks for listening.

38:06
With the migrations of subgraphs to the mainnet, is it reasonable for Delegators to expect their rewards earnings to be based more on query fees and less on indexing rewards?

38:18
Yes, absolutely. It’s important to understand the phase that the network is in right now and what the function of indexing rewards is. The network is bootstrapping right now; there’s only a select handful of subgraphs that are migrating. But we know, based on what’s on the hosted service, that there’s potential for a lot more of the subgraphs to migrate over and for there to be a lot of query fees. At the very beginning, you have this kind of chicken-and-egg problem, where no one is using the network because there are no Indexers, and no Indexers want to index and serve queries because there are no users. So, in order to bootstrap that, the indexing reward is added so that Indexers have a source of income before the volume of queries scales in the network. The plan has always been that this is more or less temporary in terms of it being the dominant source of income, and that over time the proportion of income is going to shift toward query fees. Indexing rewards are never actually going to go away, because they still provide their own value, which is that Indexers can cross-check that they are doing the indexing correctly by comparing their Proofs of Indexing. And that ends up securing the network, because later, as we have provable query responses, from taking a Proof of Indexing and a response you can eventually show that the response you got is the correct response by comparing these two things. Once you have that, then even just providing a Proof of Indexing is going to help secure the network, because the more of them you have, the more of a stamp of approval you have on all the queries that have been served from those Proof of Indexing values. But really, the point of the network is queries, and I think that revenue is going to shift heavily in that direction over time.

40:34
I’d like to ask you some questions and have you define some terms or concepts that Delegators may not fully understand. So, let’s start with GraphQL. How should we think about what GraphQL is?

40:46
The QL in GraphQL stands for query language, and it’s just a standard for making queries where a consumer can ask for some subset of the data that may exist on the server. They may ask for an entity and some particular fields on that entity; they don’t want to know everything about it, they only want to view some subset of the data. Having that kind of a query language makes things more efficient, because it means that less information may have to be transmitted over the wire, so it comes to you faster and can be read from the database more quickly. And all of this comes from enabling you to ask for specifically what you need, instead of saying, give me all of the information. It was invented by Facebook; it’s an open standard now, and it’s used across a wide variety of web APIs.
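
To illustrate the “ask only for what you need” idea, here is a hypothetical GraphQL query and the plain HTTP call that would send it; the entity and field names, and the endpoint, are made up, not a real subgraph.

```typescript
// A hypothetical query: only the fields listed come back over the wire.
const query = `
  {
    tokens(first: 5) {
      id
      symbol
      totalSupply
    }
  }
`;

// Sending it to a GraphQL endpoint is a plain HTTP POST with a JSON body.
async function run(endpoint: string) {
  const response = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  return response.json(); // { data: { tokens: [...] } } on success
}
```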

41:46
I’d like to hear how you define or think about what a subgraph is.

41:52
A subgraph takes information from the blockchain, which is arranged over time, because that’s kind of how a blockchain is structured, and processes the data it’s receiving over time into some other data that is more like the information you actually want to get out of the system. That kind of sounds jargony, but here’s an example. We’ll take CryptoKitties as our information layer. It’s a well-known Ethereum phenomenon where you can trade CryptoKitties, and people make these different trades, and kitties have different attributes; some might be red, I don’t really know that much about it. Let’s say that some are red kitties and some are big kitties, or small, I don’t know. So, the subgraph receives all of that information over time, and maybe you have some question like: for every user, based on how long they’ve held a particular CryptoKitty, what color was their favorite over time? That information is not actually available on chain, but you could take all of the data on chain, process it down, filter out just the information that’s relevant to you, and then store it in a different way, so that you can answer that question very quickly, in a way that would be relevant to a web application.
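
Here is a schematic sketch of that CryptoKitties example, not real subgraph mapping code (real mappings are written in AssemblyScript against a subgraph schema): a handler consumes on-chain events in block order and folds them into a query-friendly entity, which is essentially what a subgraph does.

```typescript
// Schematic only: made-up event and entity shapes for the favorite-color question.
interface KittyTradeEvent { owner: string; kittyColor: string; heldForDays: number }
interface OwnerStats { owner: string; favoriteColor: string; daysByColor: Map<string, number> }

const stats = new Map<string, OwnerStats>();

function handleKittyTrade(event: KittyTradeEvent): void {
  const entry = stats.get(event.owner)
    ?? { owner: event.owner, favoriteColor: "", daysByColor: new Map() };
  const days = (entry.daysByColor.get(event.kittyColor) ?? 0) + event.heldForDays;
  entry.daysByColor.set(event.kittyColor, days);
  // "Favorite" = the color held the longest, a question not directly answerable on chain.
  if (days >= (entry.daysByColor.get(entry.favoriteColor) ?? 0)) {
    entry.favoriteColor = event.kittyColor;
  }
  stats.set(event.owner, entry); // in a real subgraph this would be an entity save
}
```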

43:22
Can an Indexer index more than one subgraph? And can a subgraph be indexed by more than one Indexer?

43:30
Okay, so the first question is: can an Indexer index more than one subgraph, and can a subgraph be indexed by more than one Indexer? Both are absolutely true. In the case of a subgraph being indexed by multiple Indexers, this is essential to decentralization and essential to the reliability of the network. If one Indexer disappears, for whatever reason, maybe they decided they’re not interested anymore, or it hasn’t been profitable for them because they haven’t made the right business decisions about how to run their infrastructure, or an earthquake destroys their facility, whatever reason an Indexer disappears, there are going to be other Indexers that are already set up, and traffic coming from consumers can be routed to those many other Indexers. And that’s really one of the primary benefits of decentralization: reliability. Another benefit of decentralization is consumer choice. You don’t have to be locked into using one specific Indexer who, if they formed a monopoly around a subgraph, wouldn’t necessarily be incentivized to have the best service. The way the network works is that the consumer sends a query to a gateway, and they have preferences about what they value, and they send those preferences over to the gateway. So, for example, a consumer may say, I really need this query to be served quickly; I’m serving a webpage that’s really important. Or they might say, I really need this query to be served at the lowest price, because I’m doing analytics, I need to make a lot of queries, and it’s going to be expensive, so I need you to find me the cheapest Indexer; it doesn’t matter if they’re really fast or anything else. You might have a consumer who is doing financial analysis, and they may say, I need an Indexer that is going to give me great economic security, that has every incentive to make sure that their response is correct and up to date. You may have a consumer that says, I need the very latest response on chain, because I make a game, and it’s important to my users to be able to see changes that are happening in that game right away, not stale data, which would make the game look incoherent if things are changing. But maybe another consumer doesn’t care about that, right? So, there are all these different needs, and they send those needs with their query to the gateway. And the gateway has been interacting with all of those different Indexers. It’s looking at: I sent you this query a few minutes ago, how long did that take to come back? It’s looking at what kind of stake you have on chain, which is where your economic security comes from; if you stand to lose a lot if you’re slashed, then we say you have a lot of economic security. We’re looking at: is the Indexer reliable? Do they drop queries sometimes, which is going to create problems for your users? We’re looking at lots of factors, actually. The delegation is something that we’ve talked about as well, which tries to roll in many factors that we can’t necessarily observe programmatically in our gateway. So, we’re taking all of these factors into consideration, with hundreds of different Indexers that may be indexing the same subgraph, and choosing the best Indexer and the best service for that consumer. And that’s really the primary function of the gateway.
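
A simplified sketch of the kind of preference-weighted selection the gateway does is below. The weights, field names, and scoring function are made up for illustration; the production Indexer selection algorithm is more involved.

```typescript
// Simplified sketch: score candidate Indexers against consumer preferences.
interface IndexerStats {
  address: string;
  latencyMs: number;      // observed from recent queries
  pricePerQuery: number;  // quoted price in GRT
  stake: number;          // economic security: what they stand to lose if slashed
  blocksBehind: number;   // data freshness
  successRate: number;    // reliability: fraction of queries answered correctly on time
}

interface ConsumerPreferences { speed: number; price: number; security: number; freshness: number }

function selectIndexer(candidates: IndexerStats[], prefs: ConsumerPreferences): IndexerStats {
  const score = (i: IndexerStats) =>
    prefs.speed * (1 / (1 + i.latencyMs)) +
    prefs.price * (1 / (1 + i.pricePerQuery)) +
    prefs.security * Math.log1p(i.stake) +
    prefs.freshness * (1 / (1 + i.blocksBehind)) +
    i.successRate; // unreliable Indexers are penalized regardless of preferences
  return candidates.reduce((best, c) => (score(c) > score(best) ? c : best));
}
```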

47:13
How do you think about the role of curators in The Graph?

47:16
A Curator is to subgraphs kind of like what a Delegator is to Indexers. Personally, I think curation is one of the most difficult roles in the network. That just might be because of my particular skill set; other people might find that it’s the best for them. But what you’re doing when you are curating is signaling to Indexers that a subgraph is worth indexing. And so, this needs to take into account a lot of different information from a variety of sources. For example, a subgraph may be worth indexing if you expect it to have a lot of queries, a lot of consumers that care about the information that subgraph is producing. You need to have market knowledge of what is happening in the greater crypto space. What are the important projects? Where is the demand for this data going to come from? Which subgraphs make the most use of the data from those important projects? You also need to have maybe technical and coding skills, to be able to take a look at the subgraph, at how it’s actually implemented, and make sure that it doesn’t have any conditions or any kind of bugs where it’s going to serve wrong data, or maybe go into a failure mode that is not recoverable and not be able to serve consumers in the future. So, you need to evaluate not just the broad crypto space, but also the technical implementation of the subgraph. And again, it’s really all fitting into this model of adding security and value by providing market signals. This time, the market signal is for Indexers, because it costs them a lot of real-world resources, like computing hardware and being able to pay for Ethereum data, in order to index the subgraph. So, they need a signal that says: you should do this, and if you do, you’ll be rewarded, and not hit with some terrible condition, like no one cares about the data, or you won’t be able to serve the data after you’ve spent all this work. And that’s kind of a different skill set than what the Indexer may have, which is to actually run the infrastructure. So, these are just two more complementary roles in the network.

49:47
So, this is a GRTiQ Podcast first. Earlier you referenced the role of Fisherman in The Graph. What is a Fisherman?

49:56
A Fisherman is an altruistic role that helps to secure the network. The basic idea is that it tries to catch Indexers in the act of serving incorrect responses to queries. Edge & Node operates one Fisherman, but anyone from Delegators to competing Indexers can participate as well. If you make the same query to two different Indexers and get different responses, someone is lying, and you can bring the evidence on chain for a dispute. Once we determine who that is, an Indexer will be slashed. So, say that you want to add some kind of probabilistic security to the network as a consumer or a Delegator. You could say, one in 100 of my queries I’m going to perform again with different selection criteria, hopefully get a different Indexer, compare the responses, and submit a dispute if I need to. Then you’re acting as a Fisherman and helping to secure the network at a very low cost to you, if you’re only doing it one in 100 times. But with something happening one in 100 times over many, many queries over time, the chances would be really high for an Indexer who was behaving maliciously to get caught, and then to have a penalty for that.
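
The math behind that “1 in 100” spot check is worth making explicit. The sketch below assumes every sampled incorrect response is detected by the comparison, which is a simplification, but it shows why even a low sampling rate makes sustained cheating very likely to be caught.

```typescript
// Probability that a cheating Indexer is caught at least once, assuming each
// sampled bad response is detected by re-querying and comparing.
function probabilityCaught(sampleRate: number, badQueriesServed: number): number {
  return 1 - Math.pow(1 - sampleRate, badQueriesServed);
}

console.log(probabilityCaught(0.01, 100));   // ~0.63 after 100 bad responses
console.log(probabilityCaught(0.01, 1_000)); // ~0.99996 after 1,000 bad responses
```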

51:16
Can a Delegator be a Fisherman?

51:19
Yes, any role in the network is complementary to any other role. And if you feel that you would do great in both roles, by all means; anything that you learn in one role is actually going to increase your competency in the other roles as well. So, I’d encourage people to participate in as many roles as they’re interested in.

51:39
For the final question, I’d like to know if there are any common misunderstandings that you come across as you’re talking to other members of The Graph community, either in the Discord or the Telegram group?

51:51
The Graph is complicated and technical, so it’s not surprising that sometimes I’ll be reading things and find some misunderstandings. It’s great to have the opportunity to clear some of these up. One misunderstanding that I see come up quite often is that the network is not going to be viable because the price of GRT right now is, I’m just going to make up a number, $1, and that’s way more than the value of a query. So how is it ever going to be that consumers will pay a full dollar for their queries? This network can’t possibly work. The answer to that is that, like any ERC-20 token, the unit, when we say one GRT, is actually divisible into many more pieces. In fact, it’s divisible to 18 decimal places, so you can price a query all the way down to 0.000000000000000001 GRT, a decimal point, seventeen zeros, and then a one, which at a dollar per GRT is a very, very small amount of money. So that doesn’t actually pose any real problem for consumers; when they’re paying for queries, they can pay very small amounts of money for a query, regardless of the conversion rate of GRT. And in fact, when a consumer makes a query at the gateway, the Indexer, when they’re sending over their prices, can take into account that conversion rate. So, everything is dynamically adjusting to market conditions all of the time, and that’s not really a problem that we run into. Another misconception that comes up a lot is the question about slashing. Indexers ask this, Delegators ask this: can I do X, Y, or Z? Will I get slashed? Every time that I’ve heard that question, the answer has been no, you will not get slashed. I think the reason that comes up a lot is that in a lot of networks, you have to be very sensitive, for security reasons, because even something as simple as inaction can become a security vulnerability in some scenarios. So those kinds of networks apply slashing very liberally to make sure that none of these potential security vulnerabilities exist. Those kinds of questions don’t exist in The Graph network. We’re really on the other side of that, where if you’re going to get slashed, you were acting very maliciously. There are only two reasons this can happen, again, only two: an Indexer provides an incorrect Proof of Indexing, or an Indexer provides an incorrect response to a query. And even if that does happen, safeguards are in place to protect Indexers in case it happened not maliciously, but, say, because of a bug in The Graph software. There’s arbitration that happens on chain, and that will allow us to dig into what actually happened, to see if an Indexer was acting with malicious intent. So really, slashing is not something to be concerned about unless you’re criminally negligent or malicious, and that, again, has never been the case whenever I hear someone asking about slashing.
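
To put numbers on the divisibility point, here is a small illustrative calculation using the made-up $1 price and a hypothetical per-query fee.

```typescript
// Illustrative arithmetic: GRT, like other ERC-20 tokens, has 18 decimals,
// so fees far below one token are easy to express.
const grtPriceUsd = 1.0;   // the made-up $1/GRT from the example above
const feeGrt = 0.0001;     // a hypothetical per-query fee
const feeUsd = feeGrt * grtPriceUsd;   // $0.0001 per query
const smallestUnitGrt = 1e-18;         // the smallest representable amount of GRT
console.log({ feeUsd, smallestUnitGrt });
// Per-query pricing can track the GRT/USD rate however it moves, since the
// Indexer's quoted price adjusts dynamically.
```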

 

YOUR SUPPORT

Please support this project
by becoming a subscriber!


DISCLOSURE: GRTIQ is not affiliated, associated, authorized, endorsed by, or in any other way connected with The Graph, or any of its subsidiaries or affiliates.  This material has been prepared for information purposes only, and it is not intended to provide, and should not be relied upon for, tax, legal, financial, or investment advice. The content for this material is developed from sources believed to be providing accurate information. The Graph token holders should do their own research regarding individual Indexers and the risks, including objectives, charges, and expenses, associated with the purchase of GRT or the delegation of GRT.

©GRTIQ.com