The Paceline Forum > General Discussion
#31 | 05-21-2024, 04:09 PM
zap (Senior Member, Join Date: Jan 2004, Posts: 7,131)
At some point in the future AI is going to conclude that humans are a waste of resources...
#32 | 05-21-2024, 04:19 PM
verticaldoug (Senior Member, Join Date: Nov 2009, Posts: 3,355)
Quote:
Originally Posted by Wolfman
This. And as echelon john pointed out, it's the deductive AI that's interesting.

I just got back from a conference (a deep dive into echocardiograms) where I heard about a well-known clinic that took 15k patient EKGs, each followed within 7 days by an Echo read/coded by an expert-level cardiologist, and fed that into the "AI" tacked onto their medical-records system... after that training, they're feeding _EKGs only_ into the system and getting likely Echo findings back with a remarkably high level of accuracy.

So, for me, the level of real signal that can be pulled from what used to be noise, especially in imaging, is crazy.
The hardest part of all this was the curation of the 15k EKGs and the corresponding Echos: labeling, normalizing, and sequencing them in time. Once that was done, training the NN should be relatively straightforward. Predictive AI is really powerful, and that is why image recognition is such a killer application.
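A minimal sketch of that kind of supervised pipeline, using entirely synthetic stand-in data (the sizes, features, and labels below are invented for illustration; the real system trained on 15k curated EKG/Echo pairs and a far richer model):

```python
# Hypothetical sketch: learn to predict a binary Echo finding from
# EKG-derived features. All data here is synthetic; a real pipeline
# would start from the curated, expert-labeled records described above.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the labeled dataset: 200 records, 8 features each.
X = rng.normal(size=(200, 8))
true_w = rng.normal(size=8)                       # hidden "physiology"
y = (X @ true_w + 0.1 * rng.normal(size=200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain logistic regression fit by gradient descent.
w = np.zeros(8)
for _ in range(500):
    p = sigmoid(X @ w)                            # predicted probabilities
    w -= 0.5 * X.T @ (p - y) / len(y)             # gradient step

accuracy = float(np.mean((sigmoid(X @ w) > 0.5) == (y == 1)))
print(f"training accuracy: {accuracy:.2f}")
```

Once the features and labels are curated, the modeling step really is the routine part; in practice a neural network replaces the logistic regression, but the train-on-labeled-pairs workflow is the same.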

Kaggle is a machine-learning site that serves as a clearing house for competitions built around datasets like this, with decent prize money, and the sponsoring organization usually offers jobs to top finishers. Jane Street, Two Sigma, Zillow, Lyft, Microsoft, the Cancer Institute, and many others have all sponsored competitions for prize money.

https://www.kaggle.com/competitions/...lenge/overview

This was probably the coolest of all the competitions: Passenger Screening for the Department of Homeland Security. First prize was $500,000, but you needed to deliver and document the winning code (and you were probably hired into DHS too). Of the 519 teams that entered, 396 actually submitted results. The training dataset was 134 GB.

https://www.kaggle.com/competitions/...ics-ai-climsim

https://www.kaggle.com/competitions/...challenge-2024

Last edited by verticaldoug; 05-21-2024 at 04:22 PM.
#33 | 05-21-2024, 04:22 PM
verticaldoug (Senior Member, Join Date: Nov 2009, Posts: 3,355)
Quote:
Originally Posted by zap
At some point in the future AI is going to conclude that humans are a waste of resources...............
We end up being the battery, remember?
#34 | 05-21-2024, 04:57 PM
HenryA (Senior Member, Join Date: May 2009, Posts: 3,026)
In its current state, AI is making things up in some fields. There are documented cases of lawyers being sanctioned by the courts for filing briefs with fabricated case law. That is not really the fault of the AI or its developers, but of the lawyer who did not check the work.

It needs checking if you're using it for "real" work. Where I have found it useful is in generating "internet writing," which I define as seemingly authoritative verbiage that meets minimum standards of believability. When you are too lazy or short on time, AI can generate some fluff that sounds about right. Bull**** for a blog, etc.
#35 | 05-21-2024, 05:22 PM
Wolfman (Senior Member, Join Date: Mar 2013, Location: Westside Los Angeles, Posts: 436)
Quote:
Originally Posted by verticaldoug
... Kaggle...
Super cool... thanks for the link. I feel that, for now, better understanding how to query the existing generative language AIs and how to structure a NN will be about as far as I go!

I'm definitely not a fan of the "I need some text to make them think I did the work/put some thought into this" angle... it's the opposite of what I want from all of us!
#36 | 05-21-2024, 05:34 PM
e-RICHIE (send me the twizzlers yo; Join Date: Dec 2003, Location: outside the box, Posts: 2,217)
I agree with Noam Chomsky when he refers to A.I. as "high-tech plagiarism."
__________________
Atmo bis

Last edited by e-RICHIE; 05-21-2024 at 05:39 PM.
#37 | 05-21-2024, 06:37 PM
eri (Junior Member, Join Date: Aug 2012, Posts: 8)
1) There's no doubt at all that 'AI' is a really effective tool for solving certain problems. Look at AlphaGo, where a machine taught itself to play Go better than anyone has before. I've used 'AI' to come up with great approaches to complex situations where the math was too complex for me. It's pretty amazing to throw a situation at the machine and, an hour later, have it handling things better than any algorithm I could come up with. So that's a very real benefit. This type of 'auto-solve' is world changing.

2) ChatGPT/large language models: there's a set of people who have been fooled into believing that these large language models can write code, will soon be writing legal briefs, etc. Meanwhile, LLMs can't add numbers and literally have no internal semantics. My experience with ChatGPT and Copilot is that they write something that looks like code, but iterate a few times fixing bugs and you realize each is a complete idiot sociopath with no idea what it's doing, with huge holes in its implementations that it is oblivious to. Have you ever met someone with great hair who speaks confidently about solutions, but when you try to drill down on any approach you find out there's nothing between their ears? That's ChatGPT. Go ahead and ask it to prove that P=NP; a few months ago it gave me a 25-step proof, but try to follow it and you find big gaps. It's only a proof until you read it closely. The LLMs have their place, maybe as call-center operators, but they sure don't have any intelligence. Best description I saw: syntactic parrot.
#38 | 05-21-2024, 07:29 PM
dgauthier (Senior Member, Join Date: Dec 2003, Location: Los Angeles, CA, Posts: 1,437)
Quote:
Originally Posted by eri
(...) Have you ever met someone with great hair who speaks confidently about solutions, but when you try to drill down on any approach you find out there's nothing between their ears? That's ChatGPT. (...)
I have observed the same thing in my (admittedly limited) experience with ChatGPT. I asked it a bunch of technical questions I already knew the answers to. It provided authoritative answers to all of them, but about 50% were wrong.

Not impressive. At all.
#39 | 05-21-2024, 07:49 PM
Erikg (Member, Join Date: Nov 2023, Posts: 48)
Quote:
Originally Posted by e-RICHIE
I agree with Noam Chomsky when he refers to A.I. as "high-tech plagiarism."
I really like Noam Chomsky; he tells it how it is. I'm surprised by his comment, though. It sounds more like Luddism than critical thought. Mind sharing a link?

edit: found a video. Chomsky's big concerns are disinformation and the false realities AI tech makes possible: https://www.youtube.com/watch?v=_04Eus6sjV4

I think it's fair to say that society can't go backwards, only forward. If we are already going down a path of several human-made disasters, AI could be a tool that accelerates scientific understanding and solutions to those problems. Of course it could be abused; I guess I'm more of an optimist. With AI, I think of the early computer chess programs that slowly outpaced the best chess players. AI in its maturity could be a tool to help solve many of the world's mysteries... like bike geometry.

Last edited by Erikg; 05-21-2024 at 08:30 PM.
#40 | 05-21-2024, 09:49 PM
peanutgallery (Senior Member, Join Date: Jan 2009, Location: 717, Posts: 3,994)
I always saw Terminator as it was truly intended...a warning

Quote:
Originally Posted by zap
At some point in the future AI is going to conclude that humans are a waste of resources...............
#41 | 05-22-2024, 12:07 AM
RFC (Senior Member, Join Date: Apr 2008, Location: Scottsdale AZ, Posts: 1,659)
I've spent the last 9 months studying, writing, and speaking about the potential of AI to impact the legal profession and legal ethics.

Frankly, the potential for change is enormous, and far more than I can discuss here or than you would want me to.

For me, the real value of AI is its application to verified specialized databases: for medicine, all medical journals; for lawyers, all judicial opinions; for scientists, all published studies; for corporate in-house counsel, all contracts the company is party to.

One of my sons is a PhD molecular biologist. He uses specialized AI systems to write code and test chemical formulas.

Here is the precursor article to my next one, which will apply these principles to AI; it is based on my lecture at the ASU GETS 2024 conference.

https://www.linkedin.com/pulse/lawye...R1j%2F4g%3D%3D

Last edited by RFC; 05-22-2024 at 05:41 AM.
#42 | 05-22-2024, 12:28 AM
verticaldoug (Senior Member, Join Date: Nov 2009, Posts: 3,355)
Quote:
Originally Posted by dgauthier
I have observed the same thing in my (admittedly limited) experience with ChatGPT. I asked it a bunch of technical questions I already knew the answers to. It provided authoritative answers to all of them, but about 50% were wrong.

Not impressive. At all.
Eri, I think you mean stochastic parrot. It is a reference to a famous paper Timnit Gebru co-wrote while at Google highlighting alignment and other issues with LLMs back in 2021, and one of the things that probably contributed to her being fired. Link: https://dl.acm.org/doi/pdf/10.1145/3442188.3445922

DG, ChatGPT is a language model. It is trained to predict the next word so it can generate text; that is what the AI is trained to do. The fact that it produces some factual data when answering is a side effect. Being wrong is not a surprise to the people who work on the models; most people are simply fooled by the fluency. I would never use a non-grounded model relying on prior knowledge (its training data) to answer questions for me.
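A toy way to see what "trained to predict the next word" means (a bigram counter over an invented corpus; real LLMs use deep networks over long contexts, but the training objective is the same flavor of next-token prediction):

```python
# Toy next-word predictor: for each word, count which word most often
# follows it in a tiny made-up corpus, then predict the most frequent
# follower. No semantics involved, only co-occurrence statistics.
from collections import Counter, defaultdict

corpus = "the next word the next token the next item the model".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    # Most frequent continuation observed in training.
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "next": it follows "the" 3 times vs "model" once
```

The point of the toy: the prediction can be fluent and statistically sensible while the model has no idea what any of the words mean.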

People in the field using these systems often instruct the system not to rely on prior knowledge and to use only data from the context provided. From my own experience, it is just as important to tell the system what not to do as what to do. It seems odd, but the system is just following your instructions verbatim. It is like an extremely gullible but industrious child.
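One way such a grounding instruction can be phrased (the wording and the `build_grounded_prompt` helper below are made up for illustration, not a vetted recipe):

```python
# Sketch of a "grounded" prompt: tell the model to answer only from the
# supplied context, and explicitly tell it what NOT to do (rely on
# prior knowledge). Hypothetical wording for illustration only.
def build_grounded_prompt(context: str, question: str) -> str:
    return (
        "Answer using ONLY the context below. "
        "Do not rely on prior knowledge. "
        "If the context does not contain the answer, reply \"I don't know.\"\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = build_grounded_prompt(
    context="The clinic labeled 15k EKGs with expert-read Echo findings.",
    question="How many EKGs were labeled?",
)
print(prompt)
```

The negative instruction ("do not rely on prior knowledge") plus an allowed escape hatch ("I don't know") is what keeps the gullible-but-industrious child from confidently filling gaps from its training data.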

As a parent, if your child wants to learn to program, I'd enroll them in debate. Prompting and working with language models is a lot like debate, and it will be a better skill to learn.

AI in general is still riding on hype. All you have to do is watch Google I/O or OpenAI's latest presentations, and they are bamboozling you with wow. In reality, well-applied AI will just be there, seamlessly, in your environment.

Whenever a passenger comes through an e-gate at a UK airport, they have just interacted with AI to get a faster, better experience at immigration. This will be the future.

Gen AI is at the stage of the iPhone without many apps, or of Netscape's launch in 1994, when there weren't many websites to visit on the www. The AI change will be just as large as those changes, if not larger. And just as Netscape is not around anymore, I doubt most of these Gen-1 GenAI models and companies will survive.

Last edited by verticaldoug; 05-22-2024 at 12:48 AM.
#43 | 05-22-2024, 01:22 AM
tabasco (Member, Join Date: May 2022, Location: Flat part of Germany, close to the Dutch border, Posts: 47)
Sorry for my poor English, but this is a human-crafted reply. We are seeing two cycles in parallel that need to be separated: a) technology maturity and b) technology adoption.

AI/ML is nothing new; as many of you may know, it has been in development since the 1960s. What has changed since then is a) processing power (see NVIDIA and Moore's law), b) the availability of training/validation data (see Internet content), c) business use cases (medicine, legal science), and finally d) soft-computing courses at universities.

The results in some niches, such as pattern recognition on X-rays, are mind-blowing. A recent test of the Google Med-Gemini model in this field showed that the computer had a 91% hit rate in identifying a specific illness. The average true-positive rate of the human MDs who ran the same test procedure was in the 80s (percent), and the average result of fresh MD graduates is in the 60s. Is this an improvement? Very much so.

What we perceive, as mortal humans who are not in this business, is only the tip of the iceberg. Many, many applications are already ML-powered or ML-supported without our knowing; product recommendations at Amazon are one of the more popular examples. I am not a fan, but this technology is here to stay, and the positive impact will strengthen over time, not linearly but exponentially.

The bottleneck today is energy and water consumption for data-center cooling. Example: an average chat (3 to 5 questions and answers) with GPT-4 requires about half a liter of cooling water.
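To make the quoted "hit rates" concrete: they read as sensitivity, i.e. true positives over all actually-ill cases. The counts below are invented purely to show the arithmetic, not taken from the study:

```python
# Sensitivity ("hit rate"): fraction of actually-ill cases that are
# correctly flagged. The counts here are illustrative stand-ins.
def sensitivity(true_positives: int, false_negatives: int) -> float:
    return true_positives / (true_positives + false_negatives)

model_tpr = sensitivity(91, 9)        # 91 of 100 ill cases caught
experienced_md = sensitivity(85, 15)  # roughly "in the 80s"
fresh_graduate = sensitivity(65, 35)  # roughly "in the 60s"

print(f"model {model_tpr:.0%}, MD {experienced_md:.0%}, graduate {fresh_graduate:.0%}")
```

Note that sensitivity alone says nothing about false alarms; a full comparison would also report specificity (how often healthy cases are correctly cleared).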

Last edited by tabasco; 05-22-2024 at 02:57 AM.
#44 | 05-22-2024, 03:33 AM
dgauthier (Senior Member, Join Date: Dec 2003, Location: Los Angeles, CA, Posts: 1,437)
Quote:
Originally Posted by verticaldoug
(...) DG, chatGPT is a language model. It is trained to predict the next word so it can create text. That is what the AI is trained to do. The fact that it produces some factual data when answering is a side effect. Being wrong is not a surprise to people who work on the models. Most people are fooled by the fluency. I would never use a non-grounded model relying on prior knowledge (training data) to answer questions for me. (...)
Thank you for that. You inspired me to do a little research. I realize I was like the blind man feeling the trunk of an elephant, not seeing the whole picture.

While LLMs will certainly be applied to human-language tasks, I did not appreciate the really exciting (to me) capabilities involving abstract pattern recognition, such as analyzing DNA, proteins, and molecules, and detecting signals, data irregularities, and fraud. I need to catch up. Thanks again.

Last edited by dgauthier; 05-22-2024 at 03:43 AM.
#45 | 05-22-2024, 05:25 AM
54ny77 (Senior Member, Join Date: Jul 2009, Posts: 13,026)
Will it tell us what chain lube is best?