GRTiQ Podcast: 171 James Hendler

Today I’m speaking with Dr. James Hendler, director and professor at Rensselaer Polytechnic Institute and a pioneer of the Semantic Web. For those in tech and academic circles, Jim Hendler is a name synonymous with transformative changes in how we interact with and understand the web.

This is a truly enlightening conversation with Jim! He brings a rich tapestry of experiences, having worked at places like DARPA and on foundational projects that have shaped the internet and artificial intelligence as we know them today. During our conversation, Jim talks about his early interests in technology and growing up in New York, his extensive professional journey through the early days of AI and working on the Semantic Web with the likes of Tim Berners-Lee, and his insightful views on emerging technologies like web3.

Jim shines a light on a lot of aspects of technology development and application, reflecting on the evolution from early AI research to today’s state of the field. We’ll explore his significant contributions to the field, including his work on the Semantic Web, his time at DARPA, the early days of AI, and his thoughts on the future of the internet.

The GRTiQ Podcast owns the copyright in and to all content, including transcripts and images, of the GRTiQ Podcast, with all rights reserved, as well as our right of publicity. You are free to share and/or reference the information contained herein, including show transcripts (500-word maximum), in any media articles, personal websites, other non-commercial articles or blog posts, or on a non-commercial personal social media account, so long as you include proper attribution (i.e., “The GRTiQ Podcast”) and link back to the appropriate URL (i.e., GRTiQ.com/podcast[episode]). We do not authorize anyone to copy any portion of the podcast content or to use the GRTiQ or GRTiQ Podcast name, image, or likeness for any commercial purpose or use, including without limitation inclusion in any books, e-books or audiobooks, book summaries or synopses, or on any commercial websites or social media sites that either offer or promote your products or services, or anyone else’s products or services. The content of the GRTiQ Podcast is for informational purposes only and does not constitute tax, legal, or investment advice.

SHOW NOTES:

SHOW TRANSCRIPTS

We use software and some light editing to transcribe podcast episodes. Any errors, typos, or other mistakes in the show transcripts are the responsibility of GRTiQ Podcast and not our guest(s). We review and update show notes regularly, and we appreciate suggested edits – email: iQ at GRTiQ dot COM.

The following podcast is for informational purposes only. The contents of this podcast do not constitute tax, legal, or investment advice. Take responsibility for your own decisions, consult with the proper professionals and do your own research.

Dr. James Hendler (00:00:14):

If we move away from the particular technology and talk about that issue of privacy control and decentralization, and ways that communities really can somehow interact in a way that’s not necessarily owned by some mega company that’s trying to commercialize it: can we recapture the early web? Probably not. Can we recapture some of the anarchy of the early web? Maybe.

Nick (00:01:16):

Welcome to the GRTiQ Podcast. Today I’m speaking with Dr. James Hendler, director and professor at Rensselaer Polytechnic Institute and a pioneer of the Semantic Web. For those deep in tech and academic circles, Dr. James Hendler is a name synonymous with transformative changes in how we interact with and understand the web. He brings a rich history of experiences, having worked at places like DARPA and on foundational projects that have shaped the internet and artificial intelligence as we know them today. During our conversation, Jim talks about his early interest in technology and what it was like growing up in New York, and his extensive professional journey through the early days of AI and working on the Semantic Web with the likes of Tim Berners-Lee.

(00:02:01):

And we talk about his insightful views on emerging technologies like Web3 and crypto. We explore only a sliver of his contributions to the field and some of his insights from the early days. But I’m grateful for the opportunity to speak with Jim, someone who has earned a rightful spot in the history of technology, the emergence of the internet and AI and so much more. I started the interview by asking Jim about what it was like growing up in New York at a time right before the dawn of modern technology.

Dr. James Hendler (00:02:32):

The real thing about growing up in New York in the ’60s was actually the ’70s, which was when I was old enough to start understanding politics and politics was starting to become a big deal. But actually I think the most interesting thing about growing up in New York in the ’60s was it was considered relatively safe. So I had a lot of freedom of movement. I took a course at the Museum of Natural History in Manhattan. I lived in Queens. I had to take an hour in the subway on my own. I think I was nine years old, 10 years old, something like that.

(00:03:06):

So there was just tremendous opportunity being in New York. There was a lot to do both locally and globally, though from my little place in Queens, Manhattan was far away. And then as I grew older, I had friends from all over the place. I went to school in Manhattan, so I had friends from Brooklyn and Queens and the Bronx and New Jersey. It was a really eclectic sort of scene. And looking back on it, that may have shaped, in a very general way, how I approach the world.

Nick (00:03:40):

What types of things were you interested in at the time, and do you remember early contact with or early thinking about technology?

Dr. James Hendler (00:03:47):

Yeah, so I was probably more interested in science than in technology pre-high school. I also was very good at math. So of course the things that come easy to you are the things you think you’re interested in. I was doing math stuff in high school when I hit my first computer. It was very rare in those days that a high school had a real computer; rather than some little thing, we had the room-sized entity where you punched your cards and you submitted them and got your printouts and all that sort of good stuff. So that’s when I discovered that computers hated me. I was doing well in the course and I was offered an extra credit assignment, which nowadays we would call a bubble sort, but back then they didn’t have that name for it. It was just: could you sort this group of numbers in some good way?

(00:04:38):

I had a variable named A, and I punched the card, A equals blah, blah, blah. Ran it through the card reader, got my printout. It said: B, undefined variable. Okay. I tried re-punching the card. Eventually I did the whole stack again where the variable A had been renamed B, and I got: C, undefined variable, at which point we ran out of time. I don’t know if I became a computer scientist in revenge or… because when somebody hates you, you want to make up with them. But it was about 30 years later when I understood overflow errors and could actually figure out what had been going wrong.
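
For listeners who haven’t run into the term, the bubble sort Jim mentions is about the simplest sorting algorithm there is: sweep through the list repeatedly, swapping any adjacent pair that is out of order, until a pass makes no swaps. A minimal sketch in Python (his version would have been punched onto cards in whatever language his high school’s machine ran):

```python
def bubble_sort(numbers):
    """Sort a list in place by repeatedly swapping adjacent out-of-order pairs."""
    for end in range(len(numbers) - 1, 0, -1):
        swapped = False
        for i in range(end):
            if numbers[i] > numbers[i + 1]:
                numbers[i], numbers[i + 1] = numbers[i + 1], numbers[i]
                swapped = True
        if not swapped:  # no swaps means the list is already sorted
            break
    return numbers

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```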

Nick (00:05:18):

That’s amazing. So for listeners who may not understand the state of technology you were experiencing as a young person growing up with this early interest in it, how would you describe it?

Dr. James Hendler (00:05:32):

So by then it had been used in the military. If you’ve seen the movie about the women who worked for NASA, the computer that was coming in there, that was maybe five or six years earlier, maybe a little more than that. So we were probably on the second or third generation of machines, and again, it’s just hard for anyone who only knows modern computers to understand what technology was like and just how much it changed over the years. The other thing I think that got me interested in computing was that it was changing. Other fields do too, but there wasn’t anything new in my AP calculus class that hadn’t been in calculus for, well, I don’t know, 150 years or whatever at that point. But in computing, it was like no one knew how to do any of this stuff. It was pretty much all student run. The smart guy in my class was this guy named Eric Lander, who went on to become one of the most famous scientists in the world.

(00:06:39):

He eventually became the President’s science advisor. I believe he’s still the most cited scientist ever because he was the guy that did the genome project [inaudible 00:06:48]. So in my high school, I was probably in the top hundred, whereas at most high schools I would’ve been one of the lab people. So that’s always helped keep me a little bit humble, knowing that there were smarter people than me. But it was also a little tough when I was outside of school, because then it was like, I want to show off a little bit. I’d like to have my teenage years over again. But of course, so would everyone else in the world.

Nick (00:07:16):

And the movie you’re referencing there is Hidden Figures, a great movie that came out in 2016 and definitely one I’ve seen. So a great recommendation. So Jim, I want to ask you this question before we talk about university and all those experiences and decisions that shaped where you ended up. What’s it like thinking back on your childhood and your growing-up years, from the state of technology then, when, like you said, a lot of this was new and constantly changing and updating, to where it is today? Not a lot of people are able to experience that early opportunity to work on it and then see what it has become. What’s your perspective on that?

Dr. James Hendler (00:07:56):

Well, it’s funny, just recently I joined the Facebook group called Internet Old Farts, and to be able to join the group, you had to prove that you were… say something that would prove it. I actually submitted a photo of my 300 baud acoustic coupler from my college years. So now we’re sitting here talking live over broadband; 300 baud was not like you could download movies and things like that. I could tell a lot of stories on that. But the one that I think best captures how fast this stuff has changed is, when I actually graduated from college, that was ’78, I went to work for Texas Instruments, which had one of the first industrial AI groups. So Xerox PARC had had a research group in AI. Some other companies were starting to get interested. This was the beginning of what became the ’80s expert systems years.

(00:08:57):

So I was working at Texas Instruments, and one time I needed to go and get some information from the main computer system there. The guys I was working with were in a little office off of the disk farm, which was the largest industrial disk farm. Disks were these spinning platters; we still call our drives disks and things like that. Some of them do spin, very few anymore. And people still remember the little cards and things. These things looked like phonograph records, which probably means nothing to half your audience, who don’t know what those look like either. The room was the size of a football stadium, with every one of these machines, multiple stacks of these things running. This has more memory.

(00:09:48):

In fact, one day I was actually just taking all of the little USB drives that I had gotten over the years, given to me as swag from different conferences, and I was throwing them in a bag and realized my bag held more memory than the entire world had when I was growing up. And the sizes of the things in that bag, 2 GB, that kind of thing, now even those are laughed at. So the speed of change across so many different things, and then what that’s enabled in both society and science and things like that, still remains staggering to me, even though I’ve lived through it and lived through it multiple times.

Nick (00:10:37):

At the time, did you feel like you were in the pioneering years of something that would go on to become what it is? Or did it feel like, “Hey, this is probably as good as it gets, this football-stadium-sized room and all these disks. This is the state of the art and it’s probably as good as it gets”?

Dr. James Hendler (00:10:54):

You’re going to force me to tell the other half of the story. So the reason I was there was because I was working in an AI group, as I mentioned. I was probably the most junior person in the group. We decided as a language project that we wanted to get all of the patent abstracts that Texas Instruments, who I was working for at the time, had. So they had 400, 500, whatever number of patents it was, so that we could start playing with them for some natural language projects. So I went to this guy who could get me that information, and he actually had to punch job control cards and run things, and eventually got a tape, and we had to wait two days until the tape was given to him by the guys in the back room and whatever. And then we went over to my lab, and we were actually using the first commercial Lisp machine.

(00:11:50):

Lisp machines were, well, they weren’t exactly personal computers. They were big and they were shared, but they were aimed at one programmer at a time as opposed to time-shared machines. Lisp was the language of AI at the time. There were two companies that made these things, and eventually Texas Instruments decided to enter that market; it didn’t take it over, but hoped to. But anyway, we were sort of at that transition. So I took his tape, and the guy came with me, and the first thing I did is I went into the computer room and put the tape on our reader, and he was just stunned, like, “You can do that?” I’m like, “Yeah.” Then I read the tape into our machine. So I had not as much disk space as that whole disk farm, but I had a pretty big disk for those days.

(00:12:40):

I hate to think what it probably was now, but I could hold all the patent abstracts, which stunned him. And then I said, “Oh, damn, these are all uppercase. Well, here, let me write a little program.” So I wrote a program to say, if it comes after a period, leave it capitalized, otherwise make it lowercase, and that would be good enough. And I ran that, I just plugged it in and ran it, and this guy was just sitting there with his jaw on the floor. And for me, this was everyday computing. So no, what I felt like was that I had gone back in history when I walked into the disk farm, and that we were making the future. But I don’t think I was thinking yet that that was going to be a continuing path. It’s not that I didn’t think the machines would get better or that the stuff we did would get more powerful.
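
As a rough illustration of the quick fix Jim describes (his actual program would have been written in Lisp on the Lisp machine, not Python), here is the same rule, lowercase everything except the first letter after a period, sketched out:

```python
def normalize_case(text: str) -> str:
    """Lowercase all-caps text, keeping the first letter after each period capitalized."""
    out = []
    start_of_sentence = True  # treat the very first letter as sentence-initial
    for ch in text:
        if ch.isalpha():
            out.append(ch.upper() if start_of_sentence else ch.lower())
            start_of_sentence = False
        else:
            out.append(ch)
            if ch == ".":
                start_of_sentence = True
    return "".join(out)

print(normalize_case("A SEMICONDUCTOR DEVICE. THE DEVICE COMPRISES A SUBSTRATE."))
# -> "A semiconductor device. The device comprises a substrate."
```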

(00:13:32):

I knew that was coming, but I don’t think I foresaw the speed of the change. And then really it was the networking that made the big difference. And again, that gets into a whole other set of anecdotes if you want one. Good, I figured. So most people have heard that the internet grew out of something called the ARPANET, ARPA being the Advanced Research Projects Agency, later named the Defense Advanced Research Projects Agency, and you could join the ARPANET. Now, what people mostly don’t realize is that in the first set of ARPANET stuff you needed to have a router, and a router cost somewhere in the $100,000 range. So you needed a big grant to be able to join the ARPANET. So really it was only the top universities and a few companies that could afford it. Some of the big ones, MIT, Stanford, some of those universities, had been given these things by the government as part of their grants.

(00:14:40):

At TI we eventually got on it. When I was at Yale as an undergraduate, my advisor just didn’t feel it was worth the money. Interestingly, one of the things we were doing on the ARPANET was what nowadays would be called social networking. A very, very famous mailing list was called SF-Lovers, I think it was at MIT.edu, and SF-Lovers was sort of the science fiction and fantasy list. We’d send notes back and forth, very much like, imagine a Facebook group except a message the size of a Twitter message filled your screen. But we could do screenfuls. We couldn’t share pictures, things like that. We had to hide from the government that we were using this advanced technology for these personal things, and we even had a T-shirt. It was a picture of, I think it was a frog at a computer or something like that. Oh, it was an alien. It was like an alien at a computer, and you could only get it if you were a member of the group.

(00:15:48):

And so if you went to a conference or something and wore this shirt, other people could come up to you and say, “Oh, you’re SF Lovers.” The thing is, what we were hiding eventually became the driver of the networking, as networking became cheaper and wider. I remember when there were 400 or 500 people in the entire email world, and we were saying just how great it would be when everybody was on email. I was just talking to my Internet Old Farts group, and people would say, “Doesn’t everybody wish we were back in the days when there were only 200 people on email?” If you got 10 messages a day, you were one of the most popular people in the world.

Nick (00:16:32):

I want to ask you this question about that period of time specifically and getting access to that network. A lot of the way Web3 and crypto and blockchain, the way this whole industry is framed, is that it’s a return to what Web1 was initially supposed to be, that original vision of open, permissionless, and censorship resistant. And then Web2 of course came along and sort of did what it did. So I want to ask you, as an OG, if you will, is that a fair caricature of what Web1 was? Was it a spirit, a vision?

Dr. James Hendler (00:17:07):

A lot of this gets down to a technical point that I’m very, very dogmatic about, and most people aren’t, and the way I usually describe it is, I actually have a photo somewhere from a conference where Vint Cerf is wearing a T-shirt that said, “I did not invent the web,” and Tim Berners-Lee is wearing a T-shirt that said, “I did not invent the internet.” So there’s a lot of confusion about when the internet became the web. The internet actually started as this very controlled entity, the ARPANET. People started finding ways to do these informal network things: essentially, my computer at midnight would phone another computer, pass along any email that had it in the address, and that computer would then call the next. So this was UUNET, it was called. Your email address on UUNET was actually not a single address. It was essentially how you would send the message.

(00:18:11):

So if I wanted to send a message to somebody at, I don’t know, the University of Texas, I might have to say Joe Smith @CMU, @Stanford, @Texas, so I’d have to know the path. Then eventually, of course, the routers came along. So the internet was blossoming, and there were a lot of things on it for finding information; people have heard about some of these now, Archie and other systems. But what really blew it open was when Berners-Lee came along with the web, and the web was actually a specific piece of software. Just to show you Tim’s brilliance, people said, “Did he predict this thing?” And I said, “Well, he named this software World Wide Web.” That was his goal. And one of the key things is Tim was a very firm believer in open software, which was just starting to be a thing back then.

(00:19:12):

Rick Gabriel had just started to really use that phrase, and so Tim convinced CERN to actually allow the release of the code. Then when he moved to MIT to start the World Wide Web Consortium, a lot of what happened was to standardize some of those things. So HTTP, which was something Tim had thrown together, now became a lot better; so did HTML. People were saying, we need this, we need that. So you started to have a base. So I actually got involved with that world, I won’t say late, it was about ’91. Basically, by then what had come out was something called LIBWWW, or what we all called libwww. I get yelled at nowadays when I say www, because it just labels you as an old timer. What that did is anybody who had access to a machine could install this code, set a few variables, and you had a web server.

(00:20:19):

So really you just needed permissions. And of course, in those days, permissions, we didn’t have the hierarchies of IT we have now, because again, when you had 20 users in your building who knew how to… And so we had an IT staff, and you would go to the IT staffer and say, “Hey, we’ve installed this, we did this. Can you make it available?” And I still remember, one of my students, I really wish I could remember the year and the month, but there was a month where we had the most visited site on the entire web. We had a roller coaster site with pictures of some of the top roller coasters, and it got 200 hits. My home web page is still mostly handwritten HTML because there were no tools for doing it, and by the time the tools were around, I didn’t feel like redoing everything. So nowadays I just add things by hand, which of course means it’s usually very out of date. But the good news is now I don’t have to, because Wikipedia or something else updates all my information. One of the things that’s nice is the longer you’ve been on the web, the more likely you are to have good search rank. And so at one point I was the top Hendler. I don’t think I am anymore, because there are authors and actresses and things like that on the web, although I still typically show up pretty early in the search. Tim Berners-Lee topped me on that because he was the top search for Tim. You typed Tim in your browser, when we first started having completion, and it was like Berners-Lee. Owning your first name was considered huge. Owning your last name was considered pretty good.

Nick  (00:23:12):

Well, there is a lot of information about you out there, and any listener who wants to Google you will find a lot of accomplishments, a lot of things that you’ve done in the course of your career. So I want to return to your personal journey a little bit here. As you mentioned, you end up going to Yale. You choose to study computer science. So let’s talk about that. What was the drive and what was the vision for your career at that time?

Dr. James Hendler (00:23:36):

Again, understanding the landscape of the day, there were maybe 20, 30 universities in the US that offered a major in computer science per se. A lot of them were just transitioning from math to computer science or engineering to computer science. So none of the professors that I had in my early days had a degree in computer science, because there hadn’t been any. By the time I graduated from grad school, there were plenty of them. I think my graduating class had 11 computer science majors in it, which was the second-largest group of majors in computer science in the country. Because if I remember right, MIT had something like 16, right? I mean, so again, computers were fairly new and I was the only one in my group who went into this thing called artificial intelligence, because basically it was the only part of the field where you weren’t going to make money.

(00:24:36):

Because no one could figure out where this stuff would ever go, and you know, we were doing stuff that seems so primitive now, but I was always fascinated by the computer. But then I took my first real computer science course, which was much more about mathematical computation. I didn’t do very well, because that’s not what I wanted to do with computers.

(00:24:56):

So I actually switched over to being a math major for a little while until I discovered what math is. Actually looked at becoming a technical theater person because I was doing a lot of that for fun, but then when I got the two D’s in technical theater, I thought maybe that’s not the field for me. And so I moved back to computer science. And by then Yale had hired a young computer scientist named Roger Schank who went on to become a very big name in AI.

(00:25:28):

And he had come from Stanford, and I’ll tell you the day I became an AI scientist. I’m sitting in Roger’s class, he’s talking about something, somebody asked him a question, and Roger just looked at him and said, “I don’t know. What’s your guess? Because your guess is as good as mine.” And I’m like, that’s the field I want to be in. That’s a field that’s growing, that’s new, that’s different. By the ’80s, again, we were just at the beginning of the expert systems age, and I had just finished graduating. So I decided I would take a few years to decide what I wanted to do, because I got a job offer for what at that point was a very large amount of money. Nowadays a high school student wouldn’t work for it. But I moved to Texas, I joined this group. We were considered sort of the weirdest group in all of Texas Instruments, which was an accomplishment.

(00:26:20):

Lots of other stuff going on. But essentially during that time, I realized I wanted to be making advances in the field. I didn’t want to just be using this stuff. So I decided to go back to grad school, and I had a year from when I decided I would go to grad school to when I could actually apply. So I did a master’s in cognitive psychology during that time because AI and cog psych had been very linked at Yale in the days I was there.

(00:26:50):

And so that was a background. I worked with somebody who had been an early HCI pioneer. So computers and humans together became the thing I was most interested in. And that grew into, you know, I went back to grad school. Actually, I was accepted by a professor who, when he was a visiting professor at Yale, was the only person who ever gave me a B in an AI course. Everything else was an A.

(00:27:15):

But he was also the smartest guy I had met, and the most interesting in a lot of ways. This was Eugene Charniak, who unfortunately passed away this past year. Back then you could fit everybody doing AI in a decent-sized lecture hall. The International Joint Conference on Artificial Intelligence in 1977, I think it was, was at MIT. There were about 250 people there. Nobody could believe there were that many people at an AI conference. You know, it was a growing field, and I was very lucky because I was at the front edge of that. So you know, the year I applied for jobs, I sent out, you know, 15 resumes, got back about 12 interview offers. I think I ended up with seven or eight actual offers.

(00:28:04):

You know, I talk to people now who are just top computer scientists, who are in their 40s, and they say, “I sent out 75 resumes. I got back two invites, and luckily I got a job.” So I mean, the field grew, and I was just lucky enough to have been there early and could watch the growth. And you know, at some point I decided I’d like to be involved in also managing some of that growth. That’s when I went to DARPA. So now ARPA had been renamed DARPA.

(00:28:38):

DARPA had always had a mix of government people, sort of people from the contractors who built the military stuff, and college professors. And the professors would come in for a few years to sort of be the crazy people, and the culture had shifted a little before I got there. So there weren’t too many of the college types. And I realized very early in my time at DARPA that I didn’t want to be like everybody else.

(00:29:07):

I realized that you had to manage this whole organization’s portfolio, which meant you needed some people who were transitioning stuff right out to the military, and some people who were doing crazy long-term stuff. I said, I want to be the crazy one. I had been working with my research group at Maryland at the time, which was where I started my academic career, on, okay, now we have this web thing.

(00:29:31):

Let’s look at the web as a big knowledge base. How do we harness that? And we had some demos and things like that. So when I came to DARPA, I said, why don’t I convince DARPA to put a lot of money into this? And long story short, we ended up with that happening, and I ended up hooking up with Tim Berners-Lee. He had this idea called the Semantic Web. I was using a different term for it, but I liked his better. I often describe it as, his view was this very large circle, my view was a much smaller circle, and I was just lucky enough that my circle, as a bubble on the edge of his, made it a little bigger. So he was willing to talk to me. It took us a while at first, because he just assumed I was a suit-wearing DARPA guy, and I just assumed anybody who had invented the web would surround himself with yes men, things like that.

(00:30:23):

It really was the first time I visited him and his group and saw them fighting with him, and saw that he was surrounding himself with the smartest guys he could find, that I got it. And then eventually one of his guys read a couple of papers my group had written and went to him and said, “You’ve got to read this. This guy actually knows this stuff. They’re doing it right.” We started talking, and they sent me a proposal that was terrible. So I said, “Okay, I’m just not going to let anybody see this,” and I funded them. It was poorly written, but the ideas were great. We eventually became friends and went on from there and did the work together. Life has now moved us to living in separate countries, but we still stay in touch from time to time.

Nick (00:31:08):

As most listeners will know, you went on to co-author an important paper called “The Semantic Web” with Tim Berners-Lee in 2001 for Scientific American. Before we dive into some of that, I want to ask you a couple of follow-up questions. The first one is, for listeners of this podcast, AI just recently exploded onto the scene, and it’s all ChatGPT and all these LLMs, and now it’s grown even further to include music and images and so on. But clearly there’s a long history here, an academic, scientific, government type of history tied to this. So what would you want any young, newly-on-the-scene listener to know about those early days of AI, when you were studying it at Yale, and where the state of the art was at that time?

Dr. James Hendler (00:31:56):

I realized recently that I’ve been doing AI for a very long time. My first published paper was in 1977. You know, many of the things that the systems do now, many of the problems that they’re having with large language models and things like that, we had such a limited version of that, but we were still thinking about exactly the kind of issues we have today. So you know, a famous story from the early AI days in the Schank lab was: John went into a restaurant, he ordered lobster, he ate and left. What did John eat? And getting the computer to answer that question was hard. So the other day I put that to ChatGPT, and it said, “Lobster.” I said, are there other possibilities? And it said, “No, he ate lobster.” Well, actually, our hardest problem was that since it didn’t say what he ate, we would try to put in restaurant knowledge.

(00:32:55):

But then if we put in knowledge that sometimes you don’t get the right order, sometimes this happens, sometimes that happens. If we said he left a big tip, it would be more likely he ate the lobster than if he left a small tip. You know, the systems are still struggling a little bit with that.

(00:33:12):

They don’t always get it right. I mean, if you give it the right questions, that part is trivial for the modern technology. But those same issues were going on over the years. So you know, you talk about the current internet and how we have video, but none of that stuff would’ve happened without the web, without image formats, without data formats, without those things coming together. The Semantic Web was really all about making it so that data on the web could be linked to other data. A webpage could point at another webpage, but there was no way my Excel spreadsheet could; I couldn’t say, “I want the thing that fills this value to be what this other guy publishes here.” So a lot of what the Semantic Web was is just, “How do you do that?” And it sounds easy, but in fact part of the problem was with early database technology, and that’s better now. But you know, if you have a field called C12:13, what the hell is it about? Right? The other thing is, because databases were hard to change, there were a lot of weird things going on in there.
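
To make that idea concrete: the Semantic Web’s core move is to give data items web identifiers (URIs) so that one source’s record can point at a value someone else publishes, the way one page links to another. Here is a minimal sketch using the Python rdflib library; the example.org URIs and property names are invented for illustration and are not from any real vocabulary mentioned in this episode:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDFS, XSD

# Two hypothetical vocabularies, one per publisher.
FACILITY = Namespace("http://example.org/facility/")
SUPPLIER = Namespace("http://example.org/supplier/")

g = Graph()
fence = URIRef("http://example.org/facility/houston-perimeter-fence")

# One publisher describes the item, with the unit made explicit in the property name...
g.add((fence, RDFS.label, Literal("Houston perimeter fence")))
g.add((fence, FACILITY.lengthInFeet, Literal("5280", datatype=XSD.decimal)))

# ...and a different publisher can attach its own data to the *same* URI,
# which is what lets "the thing that fills this value" come from someone else.
g.add((fence, SUPPLIER.costPerFoot, Literal("12.50", datatype=XSD.decimal)))

print(g.serialize(format="turtle"))
```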

(00:34:24):

One of my favorite stories is, one of the deputy CTOs from NASA came and spent some months in my lab at Maryland. He got a reverse sabbatical so he could learn about this stuff. And he told me the story that the database for the Houston plant had a field for how much fence work they had. But technically, according to the data schema, it was in gallons. The reason is that they had taken a field from a different database, which was for their submersion tank, and just copied it over.

(00:35:05):

And they said, “Ah, we’ll just use that field.” And nobody ever updated it. But now imagine you’re trying to find that, well, okay, if you worked there you knew that. But if you worked anywhere else, it’s like, what the hell? Either I can’t find fences or, “Oh, I think they have this many gallons of water.” When you’re not talking about gallons, right?

(00:35:26):

And I mean, that’s almost an easy example. That is an easy example compared to many of the other data problems we still have to this day. Getting the data onto the web also led to linking a lot of other things. So some of the social networking stuff came because once you started labeling images, you could start saying, “Well, how do these labels relate to each other?” But scale came in there, right? So in the early days you’d say, “Okay, these pictures on Flickr can be related to these pictures using this term,” or something. And the problem is that eventually that didn’t work very well; there was just too much. And then people started coming up with different models. Some weird guys at Stanford came up with this idea of hand-encoding an entire taxonomy: you know, why don’t we just make it so you can follow down these taxonomies, and you’ll go from this, to this, to this. So if you want to find luggage, you go from, you know, a basic idea, to travel, to travel accessory, to luggage. Okay?

(00:36:30):

Eventually they realized there was a much better way to do it. That started with a company called Google, which went from doing it by hand to doing it with a combination of techniques; they invented an algorithm. I’ve probably gotten way off the question you asked me before, but the key thing was all of this stuff was going on in parallel, right? And the web had been designed in such a way as to allow that.

(00:36:55):

So that’s the big difference between the web and the internet, which by definition, to this day, has to be managed. You can’t just go to the DNS, the main namespace server, and change the name of a computer there or something. You have to go through a lot of process. There’s money involved, which of course changes the world drastically from being very open. With the web, everything was decentralized, and the purpose of the search engines was to help you put it together.

(00:37:25):

And then social networking kind of grew out of people wanting to share pictures. So okay, you know, I put pictures of my kids up there. Well, if you knew my name was Dr. James Hendler, and you were somehow searching for me in these things, you’d find it. But you couldn’t find it through “Jim,” right? So people started realizing we needed better tools. And eventually those pictures became videos. I still remember very early MPEGs being exchanged.

(00:37:51):

You would send a video, say a five-minute video, in a hundred pieces, and then you’d have to assemble them at the other end. That was on the UUNET. By the time the web came along, people were figuring out how to automate a lot of that. So you just had all these different levels of stuff converging, and the web was the thing needed to make that come together.

(00:38:14):

And the most important aspect I think, and Tim never gets enough credit for this, was his realization that there was no way this thing was going to work unless he made it free and easy for people to join. In fact, there were a lot of people before Tim, or at least a few, who had very similar ideas on how to build the infrastructure, the equivalent of the HTTP part, but had not figured out how to monetize it. And you know, they weren’t going to try to launch something until there was money in it.

(00:38:49):

Tim intuited correctly that we’re talking about sharing information. We’re talking about people finding each other, we’re talking about humans. You know, if we’re going to do that, we’re not going to try to define everything. We’re going to have to let it grow pretty organically.

(00:39:11):

But of course, if it grows organically, then you start to get into the issues. You know, some of the things that were in our early days of “What’s the Semantic Web about?” you now get for free. But it used to be, you know, how do I tell them apart? We used to use Gates as an example, right? So if you did a search on Gates, of course the first thing you hit was, I forget her name, but the woman who played one of the nurses on Star Trek. The second one was Bill Gates. You know, it was a long time until you got to garden gates. Google eventually started saying, “Hey, let’s mix these things up.”

(00:39:45):

Other search engines came along, other techniques. Now I want to search for video. Okay, how do we label the video so it’s searchable? Well, we need tags on it. So what we now call hashtags started to grow along with micro-blogging. So again, early version.

(00:40:02):

Tim’s version of the web was a read-write entity. You should just be able to create a website by moving things around on the screen, hitting go, and there you go. Right? And I find it very funny now, I’ve seen some demos from ChatGPT and people like that saying, “Look how easy it is to create this website if you do it this way.” And the answer is, that’s about a 30-year-old idea. But the technology wasn’t ready for it at the time, and people didn’t yet have the desire.

(00:40:34):

And then you asked me about sort of blockchain and things like that. Well, a lot of what the Web3 stuff is about, again, there are different definitions of what Web3 is and things like that, but a lot of what people are talking about now is the fact that what made the web grow was the fact that it was decentralized.

(00:40:51):

I used to use this example very, very often: when I was at DARPA trying to put this program together, somebody sent me an example in their proposal of how many cows are in Texas, and talked about trying to find the correct answer. And I ran it through, you know, one of the very early search engines, and I found, you know, 20 or 30 answers, one of which was that there are no cows in Texas. They’re actually alien symbiotes that have been sent in by flying saucers, and that’s why you shouldn’t eat meat. Okay?

(00:41:25):

So a radical UFOlogist animal-rights person. To this day, I have no idea whether that site was a spoof site or a real site or whatever. But here’s the thing: I realized that to someone else, you know, to someone who had that belief system, someone like me, who thought that cows were just these animals that were grown in barnyards and were there and people ate them, was just as alien.

(00:41:54):

And I started realizing we all believe different things, right? Even when there’s a large common core, right? Take my religious beliefs, or your religious beliefs. You know, if we went through writing down everything everybody knows about anything, we’d all end up hating each other. Which unfortunately is what eventually happened on social media. But that’s a whole different story. I’m happy to say I was very uninvolved in that growth.

(00:42:21):

Once you have large companies, then you start having ownership issues and leadership and management. And using Google today, try to find the radical UFOlogist page by asking about how many cows are in Texas. You don’t want to ask ChatGPT; it’ll just give you an answer. And I’m like, where’d that number come from? And it’ll say, “Hey, I found it on such and such a site,” and sometimes you’ll click on that link and it doesn’t even exist.

(00:42:54):

They’re getting better. I will give them credit for having improved some of that stuff. So again, we watched it grow, we watched it take off, we watched it get taken over, and now what a lot of people want to do is re-decentralize it. But the question is, how do you do that? Blockchain is one potential way of doing it, and you know, the cryptocurrencies kind of show that might be possible.

(00:43:17):

It’s caused a lot of confusion; a lot of people don’t understand that blockchain is not necessarily tied to cryptocurrency. So they can’t tell the… You say blockchain, they think Bitcoin, and those are not the same thing. So some people feel, you know, what you could do with blockchain is create these communities on the web that, you know, can interact with each other. But on the other hand, that defeats the openness.

(00:43:43):

Tim is working on something called Solid, and there’s a group of people trying to sort of formalize or create a standard around some of that, which is: how do we give people back access to their own information? So could I have sort of an information bank where I could say, “I’m willing to share this kind of information with these kinds of sites, and this kind of information with those kinds of sites”? Instead of right now, where it says, “Hey, if you click this button, you’re going to share everything in the world with us.” Right?

(00:44:13):

In Europe they’ve said, “Well, you’ve got two choices. You can click a button that says you are willing to agree with everything, or you’re not.” Right? I now go to lots and lots of pages that say, “Hey, you’re using an ad blocker, so we’re not going to show you our stuff because we make the money off of the ads.” Right?

(00:44:29):

And I’m like, “Yeah, but if I let you show me the ads when I click on them, you’re going to sell that information to a lot of other people, and I’m not going to get any money out of that.” Sometimes I’ll say yes, sometimes I’ll say no, depending on what I need. So the re-decentralization idea is how do we put people back in charge of the web?

Nick (00:44:49):

I want to ask you this question. It’s a little bit about history. If we’re going to credit ARPA as kind of a first mover on the internet, and then Tim Berners-Lee and the work he did as a first mover on the World Wide Web, who would you like us to think about, or who, from your perspective, should we ascribe the AI revolution to?

Dr. James Hendler (00:45:10):

I actually have a talk I’m working on now where I’m trying to show how some of the early work we did led to what became Watson, the IBM system that beat the Jeopardy champions, which convinced people that a different kind of machine learning than was being used at the time might be a good way to go. That eventually merged with some ideas from neural network research, and that really is part of what blew things open.

(00:45:39):

I could go into a lot of issues of why at one point neural nets were hot, then they went cold, then they went hot, then they went cold, then they went huge. But the answer was, you need staggering amounts of computation. So I remember a conversation I was having with some people in 2017.

Dr. James Hendler (00:46:03):

Somebody presented how you could build this kind of what we now call a foundational or large language model, and said the only issue is you would need $10 billion and the biggest computers in the world. And we all sort of giggled. Well, guess what? We now have a group that suddenly has $10 billion and access to server farms, and it started with DeepMind doing some of the things they did that blew people away. I’ll tell a story about that in a minute. So essentially AI was evolving, but it was evolving exponentially. And when you’re on an exponential curve, you can’t really tell where the knee is until you’ve passed it. So again, with the web, I still remember by ’92, ’93, it was growing. In ’95, ’96 I think it was, you had comedians who were making fun of that WWW thing you saw on everything.

(00:46:56):

By 2000 it was, “Why the hell aren’t you using the web?” Right? I think AI is going through something like that. You had companies saying, “We’ll never put anything on the web,” that now are major web companies. You have stories about a lot of the things that you can now go to that will help you find the cheapest book, or the best this, the shopping sites. With a lot of those, the big companies said, “We’re not going to play. None of those shopping agents can come here.” And the little companies said, “Oh great, we can team up, use this technology, and compete with the big kids.” At which point the big kids said, “Uh-oh, we better play too.” So there was a lot of that kind of thing going on. So really you had these growths of a lot of different things, and I don’t know who it was, who convinced whom of what. So DeepMind really blew a lot of us away with what they did with Go.

(00:47:54):

So I actually had a book that came out in about 2016, 2017. First of all, the publisher didn’t want us to have AI in the title; we had to put it in the subtitle. So the book is called Social Machines. I was talking about how what we now call AI, you’re using it whether you know about it or not, and what does the future look like? What would it look like for a doctor to have an AI that could help them look things up? We didn’t predict some of the vision stuff, but with Go, we actually had a chapter about why computers play chess so well and why they were so far away from ever being a good Go player. So we wrote that chapter, and by the time it went out for review, a computer had beaten the 835th-best Go player in the world, I think it was.

(00:48:50):

In chess, going from about 800th best to first best took a decade. So they said, “Oh well,” and we had to rewrite the book, and we added a little bit about the way humans… By the time the galleys came, it had beaten the fourth-best player in the world, who was the best player willing to play against it, and we had to completely rewrite. So the sudden growth of that stuff was a big thing. But what a lot of people don’t know is that one of the breakthroughs in making that stuff work was what are called convolutional neural networks. I won’t try to describe what they are, but what I will tell you is that in 1986, I was at a summer school, and one of the guys at the summer school was a kid named Yann LeCun. I say kid, we had both just finished our graduate degrees. And he was promoting this idea he called convolutional neural nets.

(00:49:48):

The problem is, you didn’t have the data, you didn’t have the computation… But he stuck with it, and after 20 years he won. Some of us were much more behind symbolic AI and things like that, which has been blown away by these models. But now these model folks are saying, “Hey, we need that stuff again.” It’s sort of coming back, because there are still things we can’t do despite having all this AI power, because we don’t know how to talk to the data. I just gave a talk recently on the history of the Semantic Web, which ended with, “The greatest thing about the history of the Semantic Web is that the big part of it hasn’t been written yet.”

(00:50:29):

A simple example I’ll give you: I can go to, say, I don’t know, pick your favorite site, Wikipedia, and I can look up an author I like and ask, what are all the books in that author’s such-and-such series? I wouldn’t do it, but suppose you wanted all the books in the Game of Thrones series. Okay, well, now you want to say, “Okay, so I want to buy all of those books. Here are the three I already own, so I want all the rest, and by the way, I’d like to get them for the cheapest price. It doesn’t have to come from one place,” et cetera, et cetera. “Show me something that does that.” And ChatGPT can’t do that for you, because it can’t go to those sites and look at those things. It could tell you all the books written by that author. If you ask it for all the ones in a particular series, it may or may not get it right, depending on whether that series has been specifically identified or only sort of identified.
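
The shopping question Jim describes is exactly the kind of composition the Semantic Web stack was meant to support once data is published as linked data. Here is a hedged sketch of what that could look like, again in Python with rdflib; the graph, the example.org vocabularies, and the property names are all invented for illustration:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import XSD

# Hypothetical linked data: a publisher and a shop describe the same books via shared URIs.
BOOK = Namespace("http://example.org/books/")
SHOP = Namespace("http://example.org/shop/")

g = Graph()
series = BOOK["a-song-of-ice-and-fire"]
for name, price in [("a-game-of-thrones", "8.99"),
                    ("a-clash-of-kings", "7.49"),
                    ("a-storm-of-swords", "9.25")]:
    g.add((BOOK[name], BOOK.partOfSeries, series))                         # publisher's data
    g.add((BOOK[name], SHOP.price, Literal(price, datatype=XSD.decimal)))  # shop's data

owned = {BOOK["a-game-of-thrones"]}  # the ones I already have

# "All the books in the series that I don't own yet, cheapest first."
query = """
    SELECT ?book ?price WHERE {
        ?book book:partOfSeries <http://example.org/books/a-song-of-ice-and-fire> .
        ?book shop:price ?price .
    } ORDER BY ?price
"""
for row in g.query(query, initNs={"book": BOOK, "shop": SHOP}):
    if row.book not in owned:
        print(row.book, row.price)
```

The point is not the syntax but the composition: the series membership and the prices come from different publishers, and the query can still join them.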

(00:51:24):

What do you do with something like, is Dirk Gently’s Holistic Detective Agency part of the Hitchhiker’s Guide to the Galaxy series? Well, no, but… So again, recommenders, all these things come together, but they can’t be composed, and the proposed solution is: add more and more stuff into the middle, the ChatGPT stuff, and you’ll win. And more and more people are finding that that only works up to a certain level, and that eventually you need the human in there who knows the specialty, who can edit it, things like that.

(00:52:03):

I think we’re in the early days of AI. There are certainly misuses of it that scare me, but there are also uses of it that excite me, and I think we’re going to see a lot of that changing. So you asked who had that idea? And the answer is, I can point to stuff Roger Schank was doing in that paper we published in 1977 that is still there in concept, but it’s been replaced by mathematical statistics, and those have been replaced by a different kind of neural statistics [inaudible 00:52:36], and now we’re moving to quantum computing.

(00:52:38):

So I’m sitting here trying to remember everything I learned in linear algebra. And then every time I finally think I’ve figured something out, someone says, no, no, this is probabilistic linear algebra. And I’m like, what’s that? It didn’t exist back when I was learning this [inaudible 00:52:54]. So this stuff moves on. I don’t think we’re anywhere near the end point. I just don’t think it’s a threat, because I still believe that for the foreseeable future it’s the combination of humans and AIs that is going to be powerful.

(00:53:12):

I heard a talk recently, I wish I could remember the name of the guy who said it, because he deserves the credit unless he stole it from someone else. But it’s, “You’re not going to lose your job to AI. You’re going to lose your job to someone who knows how to do AI better than you do.” If you’re a knowledge worker, I throw in. If you’re a construction worker, you don’t have to worry about AI anytime soon. If you’re a knowledge worker, then I suggest you start learning how to use the tool, because it’s a powerful tool. And by the way, it’s now built into your text editor and your mail system and your this and that. So if you know how to use it well, it can really be a productivity enhancer, and it’s not putting you out of a job. It’s in fact making you better at your job. And I actually think we may someday see the day where people who work eight hours a day actually work eight hours a day instead of 18, because of the productivity of it. I think there will be things that obviously change, but of course new professions will come along. Who ever heard of a prompt engineer a couple of years ago? Now it’s one of the most profitable things you can get into.

Nick (00:54:25):

I want to ask a follow-up, which is, as you look out to the future of AI, if this is going to persist and only get bigger, more used, more adopted, what are some inflection points that you think we should watch for as it evolves toward greater adoption or greater impact?

Dr. James Hendler (00:54:47):

So let me tell you about a project we just did. I got called in by the president of my university, I’m at Rensselaer Polytechnic Institute, and he said to me, Jim, I want to ask you a crazy question. We’re thinking about giving a posthumous honorary degree to Emily Roebling. Emily Roebling was the wife of Washington Roebling, who was considered the builder of the Brooklyn Bridge, but it was really Emily who finished the project when her husband got ill. In the TV series The Gilded Age, there were a couple of scenes where somebody played Emily Roebling. He said we could get her to come; we’ll have her give a speech to the grads.

(00:55:26):

And one of the things we do at RPI, we actually have our two or three honorands in a panel talking to each other. One of our honorands this year was Reid Wiseman, who is an astronaut, spent six months on the International Space Station, and is now the commander, or whatever they call it, the chief, for the Artemis II mission, which will go up and around the moon. So it’ll be the first time humans are back around the moon since the early seventies. Brilliant guy.

(00:55:57):

Now, how are we going to have a conversation with a woman who died in 1903, and whose most famous work was in 1883? He asked me, “Do you think we can do this with AI?” And my answer was very easy. I said, “No. Let’s do it.” And we put together a team, because that’s what we figured out worked. If you just go to ChatGPT and say, “How would so-and-so say something?” all it’s got is the generic sources. We went to our archives, we started finding articles and scanning those in and building that into our prompt engineering and things like that. We started getting things that were much more accurate to the reality, but we still wanted to capture her voice. And our archivist knew that down at Rutgers they had a lot of her letters.

(00:56:46):

So she went down there and took photos and Xeroxes of many of those. We brought them up here. We had to translate them into something we could feed to the AI, because handwritten documents from the 1800s are not something that OCR works very well on. So they came up with the idea of reading the letters out loud, recording that, and feeding that to an AI speech-to-text system. And so now we had pretty good transcriptions of the letters, and then we picked and chose what we thought were the things that best showed off her personality, her personal style. And then we had someone from the family, a lineal descendant. One of our faculty members was the great-great-granddaughter of his sister, so she was the sister-in-law’s descendant, but she also knew very well the great-great-grandson of this couple, who would provide information.

(00:57:43):

So she would look at some of the answers and say, “That’s not how she would say this in public,” things like that. So we had to have a team of people. We had to have people who knew the family. We had to have archives that gave us real information about the person. We had to have AI expertise, obviously. And I was lucky enough that one of our grad students who had defended his PhD in February was hanging around until May before he started his job, so I asked him would he mind doing it. I don’t know how many hundreds of examples we had to run and all the different things we tried. But in the end, we ended up with this brilliant conversation. Reid was such a great sport, because they played off each other really well. And he knew that she had no ability to ad-lib; he could ad-lib, she couldn’t. And he played off that well in our [inaudible 00:58:37]. And we had the person who had played her on The Gilded Age, Liz Wisan, come in full costume for the Colloquy.
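
For readers curious what the prompt-assembly step in a project like this can look like, here is a rough sketch of folding curated archive excerpts into a persona prompt. The folder name, persona wording, model name, and OpenAI-style client are illustrative assumptions on my part, not the RPI team’s actual pipeline:

```python
from pathlib import Path
from openai import OpenAI  # assumes the OpenAI Python client and an API key in the environment

def build_persona_messages(excerpt_dir: str, question: str) -> list[dict]:
    """Fold human-curated letter transcriptions into a system prompt for a persona."""
    excerpts = "\n\n".join(p.read_text() for p in sorted(Path(excerpt_dir).glob("*.txt")))
    system = (
        "You are answering as Emily Warren Roebling (1843-1903). "
        "Match the tone, vocabulary, and personal style of the letter excerpts below. "
        "Do not claim knowledge of anything after 1903.\n\n"
        "LETTER EXCERPTS:\n" + excerpts
    )
    return [{"role": "system", "content": system},
            {"role": "user", "content": question}]

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=build_persona_messages(
        "roebling_letters/",  # hypothetical folder of transcribed excerpts
        "What was the hardest part of seeing the bridge finished?"),
)
print(response.choices[0].message.content)
```

As Jim stresses, in the real project every generated answer still went to archivists and a family descendant for review before it was used.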

(00:58:45):

For the graduation, we decided that making her sit on the stage for two hours might not be nice. But she read the graduation talk that Emily would’ve given, in the voice and in the style. It was brilliant, and it was a great use of AI, but AI alone couldn’t do it. It took archives of real human material that’s not on the web. It took knowledge about the people we were talking about, about the events. It took a lot of fact checking and things like that. So at one point I said, I want to put somewhere in the brochures that we were giving out that this was done by AI, but to be very responsible, we had humans involved, we had the archives, et cetera, et cetera, et cetera.

(00:59:35):

And there was someone who wasn’t sure they wanted that included. So I had to generate, “What did Emily say when she saw the UFO fly over the Brooklyn Bridge?” That gave us an answer in Emily’s voice that sounded very authentic. And I showed it to the people and said, this is why we have to do it. We can’t guarantee that everything we’ve generated was real. We’ve done our best to be responsible and so on. So to me, that’s a long story, but it illustrates why, for the foreseeable future, we still need that combination of human ingenuity and AI power. Now, for this kind of project we needed history. But if you were doing something in biology or pharmacology, you’d need some of that knowledge, because the systems will generate a lot of answers, not all of which are right.

(01:00:29):

And you need that human knowledge to figure out either how we are going to test the answers, or what looks most feasible so we can go on, that kind of thing. How long is that going to go on? My belief is, for a very long time. I also think there’s a looming issue that has been brought up by people much smarter than I am, but I’m beginning to worry more and more about it: as more and more of the text on the web that we’re feeding to the AI systems is generated by AI systems, the creativity that comes from the humans will get left out if we don’t build it in specifically.

(01:01:10):

So I was talking to the actress, and we were talking a little bit about the difference between when she played the character on TV, where she had a script whose goal was the dramatic fit into the entire story, and ours, where it was really trying to capture a person and their personality for real. And she said they were very different challenges, and she agreed the scriptwriters don’t have to worry about AI taking over for a long time, because it could get us the right answers, but it couldn’t generate the story. We had to use our ideas. What are the right questions to ask? Things like that. So, very long answer, because all my answers are very long today. Sorry.

Nick (01:01:59):

No, it’s a brilliant answer and I appreciate that example. It really drives home some very cool things that are happening and are possible with AI. So how about this question: if we zoom out 30, 40, 50 years, I don’t know what the number would be, is there any reason to think there’s some sort of existential threat to humans as we get AGI, or whatever that is, when it arrives?

Dr. James Hendler (01:02:25):

It’s very funny. Right before we did this podcast, I was on a phone call with the technical policy committee from the Association for the Advancement of AI, and we were asking exactly that same question. One of the things I said is, “If AI gets to the point where it can be the thing that kills us all, I’ll actually be very happy, because there are so many other things that are so much more dangerous in a much shorter time.” I don’t think we’re going to make real progress in any kind of technological solution to climate change without AI’s help. Is AI going to kill us all? Well, maybe it’ll keep us alive long enough til winter can kill us all. But frankly, I don’t actually believe it’s an existential threat. I gave a TEDx talk one time where I said, “I’m taking a bit of a risk here because I’m about to argue with the smartest man in the world, but Stephen Hawking is wrong.”

(01:03:22):

Hawking’s argument was, “Once AI systems could build other AI systems, they would start evolving at a speed we couldn’t keep up with and they would eventually replace us.” And my answer is, “What niche do we compete for?” Evolution requires a niche. If I was a super smart computer that could generate other super smart computers, I’d be trying to figure out, how do we make something really small that can get the hell off of this planet and let us colonize the universe? So I’m not competing with humans. I don’t buy what’s called the paperclip argument, that the AI may suddenly decide that the most important thing to it is something crazy.

(01:04:04):

What I will say is, someone once asked me if I am an AI optimist or a pessimist. And I said, “I’m an AI optimist, but I’m not really necessarily a human optimist.” There are a lot of ways to misuse AI. I think we need regulation in that space. I think we need a lot of thinking. I think people have to really, really understand what it can and can’t do. At which point I think we start to see a society where AI is either used responsibly or, when it’s used irresponsibly, people can recognize that and eventually do something about it.

Nick (01:04:44):

What application of AI makes you most excited or optimistic? We’re already seeing it in different ways and maybe it’s already happened, but I’m thinking things like medicine, maybe science, maybe law, maybe politics or social issues. But is there one of those that you think about that makes you kind of optimistic or hopeful about the impact that AI will have?

Dr. James Hendler (01:05:05):

What I believe is that a lot of the biggest problems we face today cannot be solved by a single one of those things. So for example, COVID: the vaccine was very important, but at the same time, we had this political thing happening that was almost making the vaccine not effective. And meanwhile, there were parts of the world developing other vaccines using other techniques, and the people weren’t working together because of social issues and things like that. So a couple generations from now, you’ll want an mRNA vaccine for a particular variant of something or other, and the computer will help you find that without any problem. But then you still have to test it, because you’re only getting a probabilistic solution. And then you still have to convince people it’s right. And you still need to go through testing, things like that, and you just have to go through real modeling and simulation.

(01:06:07):

So I actually believe that AI is going to become an invaluable tool throughout a lot of things, but where it’ll be particularly useful… So in the story I just told, we needed an archivist, a family member, a general AI specialist, and a ChatGPT specialist. That team was then able to work with a group of other people who were the ones concerned with how we would do the discussion. Call it the scripting team; many of them had to learn a lot about Emily Roebling. And it only took us a few weeks to get something really cool. The students went insane, but none of that could have worked without all those groups.

(01:06:57):

I had a paper, I guess last year, that came out in Nature Reports. It’s the first time I’ve ever gotten anything in Nature, because it was about biology. And what happened is we had a student who used AI learning techniques on data that had to be integrated to predict toxicity in chemicals that hadn’t been tried yet. And the reason we could put it together was we had… Actually, in this particular case, I wasn’t the AI expert, because she was using some advanced stuff from IBM and it was a joint project with them. I was the one who was really the data integration expert. So the problem was, you could get this mouse data, this test data, and that test data; how do we get that all into one place? And then we worked with one of our top scientists here who really understood liver toxicity. And we had something that could actually do pretty good liver toxicity predictions. And that’s a very early start. So that was, again, a combination of capabilities that made it possible for us to solve a very hard problem.
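
As a rough, hypothetical illustration of the data-integration-plus-learning pattern Jim describes (the actual project used more advanced techniques developed with IBM), a sketch might look like this; the column names, file names, and model choice are assumptions for illustration only.

```python
# Hypothetical sketch: heterogeneous assay tables (mouse data, in-vitro tests)
# are joined on a shared chemical identifier, and a standard classifier is
# trained to predict a toxicity label for compounds that haven't been tested.
# Columns, files, and the model are illustrative, not the paper's method.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier


def integrate(mouse_csv, assay_csv):
    """Join two assay tables on a shared chemical ID into one feature table."""
    mouse = pd.read_csv(mouse_csv)   # e.g. columns: chem_id, dose, liver_toxic
    assay = pd.read_csv(assay_csv)   # e.g. columns: chem_id, assay_1, assay_2
    return mouse.merge(assay, on="chem_id", how="inner")


def train_toxicity_model(table):
    """Fit a simple classifier on the integrated features."""
    features = table.drop(columns=["chem_id", "liver_toxic"])
    labels = table["liver_toxic"]
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(features, labels)
    return model


# Usage with hypothetical files:
# table = integrate("mouse_data.csv", "invitro_assays.csv")
# model = train_toxicity_model(table)
# predictions = model.predict(new_compound_features)
```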

(01:08:07):

And now she’s working at a company that’s looking at second and third generations of this thing to really use it, and they understand that they can’t just say to some AI person, “Here’s the data, go off and fix this.” They understand that they have to have these people working together. So I’m actually trying to convince the university to create essentially a multidisciplinary, project-based curriculum: let’s take the big problems of the world, use those as the curriculum, and start asking not how we teach people AI, but how we teach people to work together with AI and scientists and politicians and lawyers and whoever else needs to be involved. So there will be very specific applications, business, prediction, things like that. But I think the bigger thing, to me, the exciting thing is-

Dr. James Hendler (01:09:03):

… bigger thing to me, the exciting thing is that it can be the glue that glues together a lot of stuff, because what we have in common is language. And so having these language models, they may not understand what we’re talking about, but they can almost, in a sense, translate, because they can talk biology okay and they can talk AI okay and they can talk this other thing okay. And so now the team can much more quickly communicate with each other. We’re also working on a lot of things that are collaborative visualization, collaborative conversation, supported mostly by machine learning, not these newer AI techniques, but we’re starting to bring in more and more AI. Again, it’s these combinations, but we don’t train people how to work together. People who leave universities with PhDs, a lot of them join companies that complain that the first thing they have to do is teach these people how to work with other people. And many of the ones who don’t want to work with other people come back to the university.

It’s going to change. I think the world’s going to change. I know one or two people who won’t use Waze or a GPS or something like that; it’s a very small minority, because people are saying, “Hey, this thing does a lot for me.” And they’re not asking it to explain why it chose this route or how it did this thing or stuff like that. It reached a certain point of credibility, and I think a lot more AI systems can reach that point. It’s going to start creeping, it has already started creeping, into our lives, as evidenced by the fact that you and I are talking over a link and you’re recording and it’ll eventually get transcribed, et cetera, et cetera. But I don’t see that as a threat.

(01:10:47):

I see that as a promise, because if we can get people to learn to work together technically, at least in theory, we could get them to work together socially, in community, et cetera. And our world is changing in that way. Our world is changing from one where everybody grows up and marries somebody and has their however many kids and blah, blah, blah, to a world where there’s a lot more diversity. And getting that world of diversity to where people can communicate across these barriers is going to take learning how to talk to other people, which is something I’m trying to learn how to do.

Nick (01:11:27):

I only have two more questions for you before I ask you the GRTiQ 10, and these are fun questions I ask each guest of the podcast every week. The first question is, if you don’t mind, can you share your perspective and opinion on the emergence of Web3? And I don’t know if you can only speak for yourself, or if you have a sense of how people of your generation who were early on in the tech space and the web-building space perceive it: do they recognize Web3 as sort of an evolution and an exciting new experiment, or how do they see it?

Dr. James Hendler (01:11:57):

I’ve written articles and worked with people on this, so I have my opinions, but I think the answer is, if we move away from particular technology and talk about that issue of privacy control and decentralization, and ways that communities really can somehow interact in a way that’s not necessarily owned by some mega company that’s trying to commercialize it: can we recapture the early web? Probably not. Can we recapture some of the anarchy of the early web? Maybe. But I’m not sure that the technologies that are currently people’s top choices are the right ones to get us there, because one of the big fights we had in creating the web was keeping it one web. Lots and lots of people wanted this web and that web and we’ll get them to talk to each other. And if you look at what’s happening now in knowledge graphs, if you look at what’s happening in corporate chat programs, it’s back to everybody being in their own corner.

(01:13:00):

And that’s why I can’t order those books that I mentioned before, because this bookstore and that bookstore and that bookstore and that thing, all the information’s there, but how do you put it together? And I think the decentralization that Web3 promises is something that many of us see as potential, but there are two pieces of it. One is the stuff I’ve been talking about, which we call composability, and then of course there’s the privacy protection that cuts across all of this stuff. And then there’s this ownership and things. So cryptocurrencies focus much more on the ownership. Some of these other solutions focus much more on the interoperability. I think that stuff will grow together, and I think AI will be helpful because, again, if somebody will teach it to talk these spreadsheets and someone else will teach it to talk those spreadsheets, and we can somehow bring that together, then you and I can suddenly access both in some way.

(01:14:02):

We may not have it in full technical depth. And then of course we have to figure out how to control that and live with that. Well, most people in my generation don’t know what Web3 is, but they don’t know what Web2 [inaudible 01:14:16]. Your listeners can’t see how gray my hair is and how little of it is on my head. I think we have a long way to go. I think a lot of the positive things about the web being distributed need to be recaptured. I think there were some very negative things.

(01:14:33):

Those you see much more now, for example, in social media and things like that: once you can connect to a lot of people and you have your own choice of who to connect to, you have a whole different kind of notion of community. And I think a lot of the people who are pushing Web3.0 are very happy with that notion of community, as long as they’re the ones who control it and they’re the ones who make money from it. And remember, years ago in this conversation, I said what made the web work was that Tim gave it away. And I think we have to start getting some Web3.0 technologies that, if not given away, at least start to return to us some of the financial benefit of our own information and things like that.

Nick (01:15:20):

And then, Jim, the final question I want to ask you, and I’ve got to say, fully honest to all the listeners, this is probably an interview that could go on for hours and hours, because we didn’t even get to touch on a lot of the experiences you had at DARPA and other things that I think would be of a lot of interest to listeners and to the history of technology and where we are today. But the last question I want to ask you is about your legacy. So how do you think about your legacy? How do you want to be remembered for the things that you contributed to, the things that you worked on? Do you think about that?

Dr. James Hendler (01:15:50):

Most of us don’t have any chance of going down in history. I at least have a vague chance of going down as a footnote in Tim Berners-Lee’s [inaudible 01:15:59]. I’m pretty happy with that. So frankly, I’m really not concerned with, “Do I get this credit? Does this thing happen?” I want to see the world realize that we can’t solve these big problems given the fractured communities we’ve created; the web created communities that then broke. And I want to put those communities back, and I wish I knew how to do it, but at least I want to be one of the people who’s saying, “We have to think about it.” And I don’t expect to be one of the inventors of that thing, but boy, I sure would like it to be one of my grad students who goes off and does that.

Nick (01:16:43):

Well, then I lied. I want to ask one follow-up to that. Is there a contribution or a point of your career that will always stick out in your mind as something you’re most proud of?

Dr. James Hendler (01:16:52):

It’s the Semantic Web stuff. So the phrase Semantic Web started to disappear around 2012. And a lot of that was because Google started to actually use the stuff and didn’t want to say there were other people. So they came up with this notion of calling it a Knowledge Graph, which was another emerging [inaudible 01:17:12]. So now a lot of the Semantic Web work is presented at things like the Knowledge Graph Conference. But what I do see happening… In the 1980s, I was publishing some papers about how we have to get the symbolic side of AI and the neural side of AI talking together. I wish I’d been smart enough to call it neuro-symbolic AI, because then I’d have a real legacy [inaudible 01:17:36].

(01:17:36):

But for what it’s worth, I think it’s bringing data to the web, bringing data together, and keeping the focus that we can’t do this without knowledge: the bathtub full of words of a ChatGPT is never going to replace us. It’s our ability to create, think, do things in different ways. And I think the Semantic Web is part of how that communication has to happen. Because again, we can’t have this conversation unless we talk the same language. And I don’t necessarily mean English, I mean technology. If you went out on the street and asked somebody, “What’s your legacy to Web3?”, you’re going to get blank stares. I mean, we have to be in a space where we share a lot of concepts.

Nick (01:18:25):

Well, Jim, this has been an absolute honor and thrill to be able to interview you, and it’s my thrill now to ask you the GRTiQ 10. Now, these are 10 questions I ask each guest of the podcast every week. And I do it because I think it introduces listeners to new ideas, or maybe they can try something new, and of course, potentially they could achieve more in their own life. So Jim, are you ready for the GRTiQ 10?

Dr. James Hendler (01:18:46):

Yeah. I feel like I’m at the final exam.

Nick (01:19:03):

What book or article has had the most impact on your life?

Dr. James Hendler (01:19:07):

I tried really, really hard to figure that out, and it’s a lot of them, because, you see, the way I think is broad. I was and am a big science fiction fan. I think science fiction often doesn’t predict the future, but thinks about the issues of, “What is society if this happens? What is society…” So it would be something in the science fiction realm, but I don’t think I can point to any one book or series, because it’s that stuff coming together that gets me excited.

Nick (01:19:41):

And is there a movie or a TV show that you would recommend everybody’s got to watch?

Dr. James Hendler (01:19:48):

2001: A Space Odyssey. There’s a generation of us about my age who started AI, started doing computer science/AI in the late ’60s, early ’70s, and we used to be referred to as the HAL generation. Because we’re the ones who watched the movie, looked at HAL and said, “We want to build that.” Not the ones who looked at the movie and said, “God, I’m scared of that thing.” A quick footnote I say is, the other reason I want people to watch it is I watched it with one of my nephews a few years ago. He’s now more grown up, but he wasn’t at all fascinated by HAL. He already had computers that talked to him all over the place.

(01:20:33):

He was fascinated by the idea of the humans going back… A human trip, a colony on the moon and trips to Saturn. I really hope we go back there. And so I think the movie has two points. One is it’s a scare movie about AI, which is where a lot of people got their ideas, but it’s not really that realistic. And then it’s a movie that says, “But our aspirations should go way beyond the computer. It wasn’t the computer that did that movie.” Yeah, the ending was weird, okay, but what the hell. Now that cannabis is legal, I even suggest [inaudible 01:21:12].

Nick (01:21:13):

That’s a great suggestion. Jim, if you could only listen to one music album for the rest of your life, which one would you choose?

Dr. James Hendler (01:21:20):

It would be a Beatles album, but I don’t know which one. Either the White Album or Sgt. Pepper’s. But that dates me some.

Nick (01:21:29):

And how about this one? What’s the best advice someone’s ever given to you?

Dr. James Hendler (01:21:33):

I’m going to have to clean it up a little bit, but at one point in my career, I was getting all this different advice from different people. And again, I was pre-tenure and it was about how to get tenure: “You should do this and you should do that and you shouldn’t do this thing.” And I went to one of the faculty members who had been around a long time. I said to him, “How do I deal with this?” And he said a four-letter word, but I’ll just say he said, “Ignore them.”

(01:22:07):

And I realized that that was actually, in some ways… What he really said to me was, “Look, trust your instincts. You’re the one who’s going to make your life. If you do what other people tell you and you succeed, well, you’ll succeed because of what they told you, and it won’t be your success. And if you don’t succeed, well, you’ll learn from your failures and go on and succeed in other ways.” The advice I give to every junior faculty member I mentor is, “The number one piece of advice I can give you is don’t listen to any advice, including this one.”

Nick (01:22:44):

And this is an incredible question for someone like you, but what is one thing you’ve learned in your life that you don’t think most other people have learned or know quite yet?

Dr. James Hendler (01:22:54):

So I was taught something by working with Tim Berners-Lee. Tim’s absolute superpower is looking at a huge structure and saying, “Here’s the crack in the foundation that’s going to cause it to fall down.” So the reason the web works where virtually all of the previous hypertext didn’t is Tim’s realization that if you didn’t have links that broke, you couldn’t have scaling. And I could take another hour to explain why that’s true, but just trust me on it. So there used to be a slogan: it’s the 404 error that built the web. And then I was involved in a project where Tim was just monitoring at a distance. We had a total of 7,500 email messages in the mailing thread for the working group. And at one point, about 4,000 messages in, Tim sent me a message saying, “Jim, you’ve got to step in and fix this thing.”

(01:23:51):

And I’m sitting there looking at it going, “What? This is one technical issue among a zillion.” And then I started thinking hard about it. He was absolutely right, and it eventually became the thing that was the most powerful feature. Getting it right was what made a lot of the stuff we did work. So I asked him about that and he said, “Look, start by questioning the assumptions. What most people do is they start with the assumptions, and then only after things break do they go [inaudible 01:24:22]. Start by saying, ‘If I live by these assumptions, well, let me do what I need.’”

(01:24:28):

So that’s something that I find, when I say it to people, a lot of them look at me like, “No, that’s not the right way to do it. You don’t question the received knowledge.” And the answer is, maybe some of it. I don’t go back and look at some of the calculus stuff and say, “Hey, maybe [inaudible 01:24:47] differently.” But there were mathematicians who became very famous by doing exactly that. And I think it’s really both your own assumptions, but even more importantly, the assumptions of people who say, “You can’t do that because…” Look at those becauses. “You can do that, but only if…” Look at those only ifs. And I think that took me a lot of years to learn.

Nick (01:25:13):

What’s the best life hack you’ve discovered for yourself?

Dr. James Hendler (01:25:17):

It’s going to come across a little weird, but it’s learning to do this. We’ve had a long conversation about technology, and I know there are words I’ve used that would make some listeners scratch their heads a little, but I’m learning how to talk about technology to real people, because we’re never going to solve a lot of these problems, like the fear of AI, like the regulation of AI, like these new chemicals and vaccines, blah, blah, blah, unless people who do science and technology learn how to speak to normal people. And we’re not necessarily normal people.

Nick (01:25:58):

I know I’m not. How about this one, Jim? Based on your own life experiences and observations, what’s the one habit or characteristic that you think best explains how people find success in life?

Dr. James Hendler (01:26:14):

Learning that they’re not necessarily the smartest person in the room. I will admit, personally, it took me a lot of years till I got there, and I would not be a well-known scientist in this space if I hadn’t learned that lesson. In fact, at DARPA, it was my guiding mantra. When I looked at the grants, I said, “Who are the people on this list who have a better idea than I do? Let’s make sure they’re in the room.” Whereas a lot of DARPA guys say, “My idea is the thing we’re going to build.”

(01:26:48):

I was like, “I want to surround myself with the smartest people I can find, because maybe they’re smarter than I am.” I still have trouble learning to listen. But what people don’t realize is, you asked me a long time ago about growing up in New York, and one of the problems with growing up in New York is you learn to talk and listen at the same time. And most people in the world get very upset by that, because they think you’re not listening to them. That’s taken me a long time to shake. I’m still not great at it. But I hear every word they say, and I take it to heart. I may disagree, I may agree, but you’ve got to listen.

Nick (01:27:22):

And then the final three questions are complete the sentence type questions. So the first one is, “The thing that most excites me about the future of AI is…”?

Dr. James Hendler (01:27:32):

That I don’t really know the future of AI, that it’s ours to make.

Nick (01:27:37):

And how about this one? If you’re on Twitter, and I know you don’t do a ton of social stuff, so let’s just say, “If you want to follow the latest or the best trends related to AI, you should be following or staying in touch with…”?

Dr. James Hendler (01:27:51):

A very wide group of people. Look at what the recommender system is telling you, and then don’t follow only those people. Try to find others. Go through the long list of people it’s showing you and say, “Who are the people on this list I don’t know who might be interesting?” Go look at what you can see from them. So I do actually do a fair amount of social stuff; it’s just, I’m still looking for the right platform. I really liked microblogging, but I have some trouble with the current system. Follow a lot of people, listen to a lot of things. When I watch the news, I watch the one I like, but then I check what the same stories look like on the guys from the other side. I usually don’t agree with them, but at least I know what they’re saying.

Nick (01:28:42):

And then, Jim, the final question, “I’m happiest when…”?

Dr. James Hendler (01:28:47):

I’m happiest when my personal life, my academic life, and what I like to call my traveling, hobbies, things like that, all come together.

Nick (01:29:12):

Dr. James Hendler, what an absolute thrill to have you on the GRTiQ podcast to meet you and to hear these stories and also these ideas that you’ve shared. I am grateful for your time and I really appreciate it. If listeners want to learn more about you, stay in touch with the things you’re working on, how do you suggest they do it?

Dr. James Hendler (01:29:29):

Wikipedia is a good starting place. My own web page, as I say, is out of date, but usually has stuff. And really search and listen to my podcast, look for a video, things like that. Typically, the shorter things I do are probably the most interesting, unless it’s something like this where we’re trying to do a deep dive into a long-term thing. So my technical talks, most of them are out there on the web. People who really want to follow that level of detail, they’re there. If you search on my name and public radio, you’ll find that I’m on a lot of panels and things for one of the Northeast public radio stations. Often those are talking about the things I’m most interested in.


DISCLOSURE: GRTIQ is not affiliated, associated, authorized, endorsed by, or in any other way connected with The Graph, or any of its subsidiaries or affiliates.  This material has been prepared for information purposes only, and it is not intended to provide, and should not be relied upon for, tax, legal, financial, or investment advice. The content for this material is developed from sources believed to be providing accurate information. The Graph token holders should do their own research regarding individual Indexers and the risks, including objectives, charges, and expenses, associated with the purchase of GRT or the delegation of GRT.

©GRTIQ.com