
Yahoo comment issues

2.2K views, 29 replies, 16 participants, last post by Milos
This AI crap is getting carried away... in many places! There's NOTHING magic about AI... it's programmed by 'computer programmers'... The only thing relating to "intelligence" is that the software should be using past/historical information in its responses... EVERYTHING ELSE is 'artificial' crap...
"It's programmed by computer programmers" - it sounds to me like you're not familiar with the concept of "machine learning", which is a key part of present-day AI.

For a quick overview, see https://cloud.google.com/learn/artificial-intelligence-vs-machine-learning. The important concept here is that the computer "teaches" itself, through adaptive algorithms, to improve its accuracy.

Of course, it's been shown that AI can "learn" to be as biased as a human being, based on the data it consumes.
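To make that "teaches itself" idea a little more concrete, here's a toy sketch of my own (not from the Google article) of a program adjusting its own parameters from data, rather than having a programmer spell out the answer - a one-variable model fitted by gradient descent in plain Python:

```python
# Toy example: "learning" the rule y = 2x + 1 from examples alone.
# The program starts with arbitrary guesses for slope and intercept and
# repeatedly nudges them to reduce its prediction error -- the adaptive
# loop that improves accuracy over time.

data = [(x, 2 * x + 1) for x in range(10)]  # training examples: (input, correct answer)

slope, intercept = 0.0, 0.0   # initial guesses
learning_rate = 0.01

for step in range(2000):
    # Average error gradient over all examples (squared-error loss).
    grad_slope = sum(2 * ((slope * x + intercept) - y) * x for x, y in data) / len(data)
    grad_intercept = sum(2 * ((slope * x + intercept) - y) for x, y in data) / len(data)
    # Nudge the parameters in the direction that reduces the error.
    slope -= learning_rate * grad_slope
    intercept -= learning_rate * grad_intercept

print(f"learned: y = {slope:.2f}x + {intercept:.2f}")  # ends up very close to y = 2x + 1
```

Real machine-learning systems do essentially the same thing, just with millions of parameters and vastly more data - which is also why the bias point above matters: the model can only learn what its training data shows it.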
 
Geek: Tell us all about your knowledge of AI and 'machine learning'? I worked on a project (for the government) in the mid-'80s which employed a form of AI even then. I'm sure it's 'evolved some' since then, but maybe we'll let you explain all the advances to us here?
"Evolved some" is putting it mildly...

In the late 1980s, I was in graduate school and took a course in AI. What I remember about it was that, at the time, it was mostly a whole lot of searching and sorting algorithms. It was kind of like an upgraded version of an undergraduate-level data structures course.

AI was a two-semester sequence at the time, but I was unimpressed enough not to take the second semester; I picked a different class for the spring instead.

If you're interested in knowing what's currently being taught in AI - and it's a whole different animal now - you might want to peruse this online course catalog: Computer Science (CSC) at North Carolina State University

I'm retired now, but the last few years I was employed, my employer was doing a deep dive into development of AI. I was not in one of the departments doing that work. I did see demos of their applications, though.
 
On the topic of AI, I do think it is amazing what it can accomplish. I am not saying it is always good, but it is very complex and can go through a tremendous amount of data to reach its conclusion. That conclusion isn't always right. I worked in tech for a financial firm and the company brought in an AI firm to do a pilot to simulate a trading desk. They spent 6 months running through data and then spent a month doing the simulation. In the end, the traders did a better job using a spreadsheet. That is not to say more data or a better company wouldn't have done a better job, but my point is that it isn't infallible. There are places AI can be really beneficial, such as medicine. I don't like all the companies that think it can replace everyone's job.

Art
Art, when was this AI work done (what year)? Having followed the advancement of AI through my job before I retired, I'm curious about the timeframe.
 
I took an AI course at Berkeley in the early 80s as an undergraduate.
It seemed like a waste of time to me; it was relatively primitive.
During most of my career (I retired in 2017), my view was that the oodles of data the government and other entities collected on the populace would strain even their ability to manage and store it, much less use it to any advantage.

That all changed with the advent of algorithms for storing and analyzing big data, and with AI that has recently become genuinely useful and productive.
I no longer have to read all of the reviews on products I'm considering -- the AI summaries are pretty good.
But we should worry about where all of this is headed.
Regarding our loss of privacy over the last few decades, I like to quote Leonard Cohen: "Everybody knows that the war is over. Everybody knows the good guys lost."
A good summary of the situation.

An AI course at Berkeley in the early '80s should have been a pretty good snapshot of the state of the art at the time. While I took my AI course in the late '80s on the east coast, it sounds like we both came away with much the same impression.

To really put all this in historical context, we need to roll back to where things stood in the late 1970s and early 1980s.

First - there was no internet as we know it. There was the ARPANET, but that was for the federal government (especially the military) and some universities. Packet-switched data networks for connecting commercial computers (think IBM mainframes and such) were just starting to gain real momentum. Small computers used 300 or 1200 baud dialup modems until 2400 and 4800 baud modems came along.

In the early 1980s, I worked for a company that made telephone central office equipment. Our location primarily focused on voice digitization and transmission equipment. (The big-honking switches were made elsewhere.)

At that time, microprocessors were having a huge impact on designing digital electronics. Microprocessor-based embedded software systems had a firm foothold and were gaining ground rapidly. Personal computing, on the other hand...

Two of my coworkers were enamored with their Apple IIs and talked about them a lot. One day, my curiosity got the better of me, so I asked one of them what they actually did with their Apples. "Mostly balance my checkbook and play Space Invaders." I thought, "What a waste of money, if that's all it's good for."

One of my more interesting assignments during this period was on a data switch for computer networks. This was a totally new area for us, as telephone networks at the time were circuit-switched as opposed to packet-switched. I was testing each new software build and reporting bugs. Unfortunately, the dumb-a***s higher up in management couldn't decide whether to charge forward and launch the product or shelve it. By the time they made up their minds, other companies had beaten us to market, so they canned the project.

The rapid advance of raw computing power at an affordable price, the omnipresent internet, and the advancing state of the art in AI have led us to where we are today. Is that a good thing? I think Vich's closing statement sums up many things....
 
One final thought on AI:

After I finished grad school and was back at a full-time job, one of my coworkers was talking about AI. He said, "You know - the development of artificial intelligence means we're also creating artificial stupidity."

He said this as a joke. I don't think he realized at the time just how prophetic that comment would turn out to be.