Do you have a software design idea? Kaggle could help with this for free


In a recent Mind Matters podcast, “Artificial General Intelligence: The Modern Homunculus,” Walter Bradley Center director Robert J. Marks, a professor of computer engineering, spoke with Justin Bui from his own research group at Baylor University in Texas about what’s happening – and not happening – in artificial intelligence today. The big story turned out to be all the free software you can use to advance your own projects. This time around, Dr. Bui focuses on what the open source (free) Kaggle platform can do for you, including its competitions.

Call it science no-fiction, if you like …


This portion of the discussion begins at 12:58 min. A partial transcript, Show Notes, and additional resources follow.

Justin Bui: Kaggle is owned by Google; I believe it was acquired fairly recently. It is an open source platform that provides computational resources for data scientists and machine learning engineers. And of course everyone has access to it; if you have an email address, you can access it.

It’s a website that allows people to post contests, so there are a lot of design contests of different types: image classification, stock price prediction, housing price prediction …

An example [of a competition] is that the NFL has a helmet detection competition underway at the moment, in cooperation with Amazon Web Services.

Robert J. Marks: OK wait. NFL helmet detection?

Justin Bui: They are trying to develop a system capable of detecting and tracking the location of the players’ helmets… So it is part of player safety.

They are looking for ways to automate this. If you think of a human in the system, how much of the field does a referee have to watch? Well, really, all of it. So they miss some things every now and then. And when you think about it from a player safety perspective, you want to minimize or, ideally, eliminate some of those hard knocks altogether. If you were to develop an AI system to do this, you could shift that burden …
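
For readers who want a sense of what such a system involves in code, here is a minimal sketch. It is not the NFL competition’s actual solution; it simply runs one (placeholder) video frame through a generic pretrained object detector from torchvision, which in practice you would fine-tune on helmet-labeled frames:

    import torch
    from torchvision.models.detection import fasterrcnn_resnet50_fpn

    # Generic detector pretrained on COCO; a real helmet detector would be
    # fine-tuned on frames labeled with helmet bounding boxes.
    model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    # Placeholder for a single video frame: 3 channels, 720x1280, values in [0, 1].
    frame = torch.rand(3, 720, 1280)

    with torch.no_grad():
        detections = model([frame])[0]  # dict with 'boxes', 'labels', 'scores'

    # Keep only confident detections; a tracker would then link these boxes
    # from frame to frame to follow each helmet over time.
    confident = detections["scores"] > 0.8
    print(detections["boxes"][confident])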

Robert J. Marks: I can also see this being used by people such as neuroscientists to study the impact of these collisions on brain development. We had a guest some time ago, Yuri Danilov, a neuroscientist who did fascinating work… and he refused to let his children play football, until his eldest son finally joined a team. And I said, “Well, what happened? I thought you banned it.” He said, “I was in the minority,” so his kid did end up playing football.

But I could see that following that in real time would be really interesting because you could measure, for example, the acceleration of the helmet, you could do the… I’m going to be a little out of date here: I think at the beginning of physics, everyone talks about distance, velocity, acceleration. And then I learned, when I worked for Boeing, that each of them is linked to the next by a derivative in calculus.

So you start with the distance; differentiate it and you get the velocity, differentiate again and you get the acceleration. And then, what is the derivative of the acceleration? It’s something called jerk. And if your acceleration changes really quickly, you’ve got a jerk associated with it. I could see [this] used with AI to monitor for jerks, which I think neuroscientists would find very interesting in terms of tracking potential brain damage.
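
To make that chain of derivatives concrete, here is a minimal sketch (with made-up position numbers, not real tracking data) that estimates velocity, acceleration, and jerk from sampled helmet positions using finite differences:

    import numpy as np

    # Hypothetical helmet positions (meters) sampled from video at 60 frames per second.
    dt = 1.0 / 60.0
    t = np.arange(0.0, 1.0, dt)
    x = 0.5 * 9.8 * t ** 2                    # toy trajectory standing in for tracked positions

    velocity = np.gradient(x, dt)             # first derivative of position
    acceleration = np.gradient(velocity, dt)  # second derivative
    jerk = np.gradient(acceleration, dt)      # third derivative: "jerk"

    # A spike in jerk flags a sudden change in acceleration, i.e. a hard hit.
    print(jerk.max())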

Who is involved in this, is it the universities, is it the companies, is it both?

Justin Bui: One thing about platforms like Kaggle is that it’s really anyone. Anyone who wants to participate can join, so from my observation it’s a lot of individuals. You can actually form teams and coordinate around the world, really, if you wish; there are several multinational teams. I think the most important thing to take away from this is crowdsourced development, so to speak. A company can, in a way, shell out what seems like a pretty large amount of money, but in the grand scheme of things, from a business perspective, it’s relatively small, and basically get unrestricted access to intellectual property that is developed essentially on the cheap.

Robert J. Marks: Wow, this is really interesting. These are companies that, if you will, outsource their R&D to competitions and probably get much cheaper results than hiring a bunch of experts and trying to solve the problem in house.

Justin Bui: Yes exactly.

Robert J. Marks: You mentioned to me that by monitoring these things on Kaggle, you did not see any advance toward AGI [artificial general intelligence] but, in a way, a reversal away from AGI. Could you repeat what you told me about this?

Justin Bui: Yes, of course. I think to sum it up, what we see, like you said, is a 180. You really almost see this hyperspecificity in a lot of applications. If you go back and look at a lot of the competitions that have ended, where many competitors have shared their code, you see a lot of evidence of transfer learning, so of course there is network reuse and all of that.

Robert J. Marks: Wait, just expand for a second on transfer learning. This is how I understand transfer learning. Suppose you have a neural network trained on dogs; you have trained this neural network to detect dogs, and you had to spend a lot of time understanding this neural network and training it to recognize dogs. Now you want to come along and classify cats. Well, it turns out that classifying cats is similar to classifying dogs, so why should you go back and start over from zero? Why couldn’t you use part of this dog neural network to train the cat neural network? And the art of doing that is called, I believe, transfer learning. Is that right?

Justin Bui: Yeah, that’s a good example: “Hey, why reinvent the wheel when I have a system that gives me 85% of a wheel?” So yes, you’ve got it.
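
A minimal sketch of the dog-to-cat idea described above, assuming a generic pretrained image classifier (the network and the two-class cat task here are illustrative, not anything from the podcast): reuse the layers that already learned general visual features and retrain only a new final layer.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Start from a network pretrained on a large image dataset
    # (standing in for the already-trained "dog" network).
    model = models.resnet18(weights="DEFAULT")

    # Freeze the existing layers so their learned features are reused as-is.
    for param in model.parameters():
        param.requires_grad = False

    # Replace only the final layer with a new head for the new task,
    # e.g. cat vs. not-cat.
    model.fc = nn.Linear(model.fc.in_features, 2)

    # Train just the new head; the rest of the "dog" network is transferred.
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)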

Robert J. Marks: Alright, good. Despite all these challenges with AGI and your observation that things are going the other way, we may be waiting for a new theoretical breakthrough, which I think will never be achieved. But nonetheless, there are people who believe that we are taking steps toward AGI… And the term I used for that was the so-called keyboard engineer. These are people who, when looking for a solution, don’t sit down and work out the theory, but go straight to the keyboard. You had some interesting comments on this. Can you expand on this topic?

Justin Bui: Yes, of course. It’s one of those things some of my colleagues and I jokingly call Stack Overflow engineers.

Robert J. Marks: Stack Overflow is a website, isn’t it?

Justin Bui: It’s a forum where people can post errors or issues they’re having with their code, and it’s a community solution house, if you will. But it’s pretty funny, because some of the coworkers I’ve had over the years joked, “Okay, hey, we just had this problem, let me go check Stack Overflow real quick. Chances are someone has already done this; I’ll just reuse it.” And I think that also feeds into some of the AGI belief: “Well, OpenAI has produced neural networks X, Y, Z,” and, “Oh, well, Google and the Google Brain team have published works A, B, and C, and if we can start to merge them, the system just gets super smart.” And so I think in some ways it’s fueled a lot by what people are seeing in big business.

It was funny because when I think of AGI I think of HAL 9000 or Skynet or, for those of you who are more into recent movies, Ultron – those systems which apparently have unlimited resources, infinite knowledge and, obviously, bad intentions. I think that’s one of the things that helps capture people’s attention and their creativity too.

But I think, at the end of the day, Bob, people go straight to the keyboard. They don’t sit down and think about how to approach a problem, how to solve it from a theoretical point of view, and then start to implement it. It’s really more, “Well, okay, I have to make a classifier that tells me the difference between kumquats and giraffes,” and they sit down and start coding.

Skynet (in the movies, not on open source websites):

Robert J. Marks: And so they import these things, download the software, and use the software as a black box, without looking at the deeper theory of how it was created, the computation it came from, and the prospects for achieving AGI in the future.

They don’t cover some of the things we talk about on Mind Matters: they do not address the Lovelace test for creativity, which has never been demonstrated in artificial intelligence. They don’t even talk about simple counter-arguments, like Searle’s Chinese Room.

Robert J. Marks: Okay, one last comment?

Justin Bui: In some ways, AI and machine learning have become buzzwords around the world. I used to joke that AI is very similar to the word “synergies” in business. Synergies, everyone wants synergies. The new thing is that everyone wants machine learning. They do not necessarily understand what it is… Typically, it is: “I have just been given an assignment, and I have two weeks to do it. I’ll go to my keyboard and start writing some code.”

In any event, Kaggle is real and Ultron isn’t (except, of course, in the movies):


Here is the first part of the discussion:

If not HAL or Skynet, what is really going on in AI today? Justin Bui talks to Robert J. Marks about the remarkable AI software resources that are free to download and use. Free AI software means that many more innovations now depend on who gets to the finish line first. Marks and Bui believe this will spark creative competition.

You can also read: The Harvard U Press computer science author gives AI a reality check. Erik Larson explained at COSM 2021 the real limits of machines that don’t live in the real world. Computers, he said, find it very difficult to understand many things that are intuitive to humans, and there is no clear programming path to change that.

and

Jonathan Bartlett: An interview with the author of Learning to Program with Assembly, a book that teaches programmers the language needed for a better understanding of their computer. Knowing assembly language as a programmer is like understanding the mechanics of a race car as a NASCAR driver, Bartlett explains.

Show Notes

  • 00:44 | Homunculus
  • 03:21 | Introducing Justin Bui
  • 04:10 | AI software
  • 06:04 | Fast AI
  • 12:58 | Deepfake technology
  • 20:03 | Transfer learning
  • 23:25 | The kidnapping of nerds
  • 28:59 | Little faith in AGI

Additional resources

Download the podcast transcript

