The Paceline Forum > General Discussion
  #1  
Old Yesterday, 03:22 PM
XXtwindad XXtwindad is offline
Senior Member
 
Join Date: Nov 2017
Posts: 8,069
OT: Are we on the cusp of an “A.I. bubble?”

Read an interesting op-ed in the NYT the other day. It's fair to say the author is not bullish on the future of A.I., which she writes has been severely overhyped. The subject has always interested me, but it's above my pay grade.

https://www.nytimes.com/2024/05/15/o...ated-hype.html

“The biggest question raised by a future populated by unexceptional A.I., however, is existential. Should we as a society be investing tens of billions of dollars, our precious electricity that could be used toward moving away from fossil fuels, and a generation of the brightest math and science minds on incremental improvements in mediocre email writing?

We can’t abandon work on improving A.I. The technology, however middling, is here to stay, and people are going to use it. But we should reckon with the possibility that we are investing in an ideal future that may not materialize.”


Anyone more well-versed on the subject?
  #2  
Old Yesterday, 03:38 PM
Turkle Turkle is offline
Senior Member
 
Join Date: Dec 2020
Location: RVA
Posts: 1,481
I work in an industry that is fully on-board with the AI fad, which is utterly embarrassing.

Let me tell you as clearly as I can: "AI" as it presently exists is, outside of a very few niche applications, at best an interesting toy, and at worst a dangerously unreliable solution. Its most useful applications appear to be plagiarism and the production of unbelievable amounts of useless trash content at scale, all at the cost of devastating power consumption and all the attendant consequences for climate change.

The hype around "AI" (I can't help putting it in quotes, as there is nothing even remotely similar to human intelligence happening here) is exactly like the hype around the Metaverse a year or so ago. Don't remember anything about the Metaverse? Of course you don't, it was complete BS. God, remember all the idiocy surrounding "The Blockchain" and "NFTs?"

The ridiculous hype surrounding the Metaverse and "AI" can both be explained fairly simply. The largest tech companies have already pretty much shot all their best bullets. The information revolution is complete. But Wall Street still demands explosive growth. So, in the absence of any actually useful new technologies, they are reduced to hyping inane fads.

"AI" is a solution looking for a problem, and it will be largely forgotten in a year or two.

Last edited by Turkle; Yesterday at 03:42 PM.
  #3  
Old Yesterday, 03:44 PM
C40_guy C40_guy is offline
Senior Member
 
Join Date: Aug 2008
Location: New England
Posts: 6,004
Quote:
Originally Posted by XXtwindad View Post
Anyone more well-versed on the subject?
Pretty much everyone, as compared to the writer.

She did get one thing right:

"The reality is that A.I. models can often prepare a decent first draft."

AI is in the middle of a hype curve, not unlike the upswing of the tulip mania of many years ago. The marketing hype would have you believe that AI, and specifically generative AI, is the answer to all the world's problems, challenges, and ills.

Um, no.

And by the way, I'm no AI fanboy. I do have some hands-on experience: I built my first expert system in 1986. On a PC.

As a tool, genAI can be quite useful, just like a reciprocating tool. Use it for the right task, and life is good. Use it for the wrong task, and you're going to make a mess.

GenAI is just a tool.
__________________
Colnagi
Seven
Sampson
Hot Tubes
LiteSpeed
SpeshFatboy
  #4  
Old Yesterday, 03:46 PM
CMiller CMiller is offline
Senior Member
 
Join Date: Jun 2012
Location: Menlo Park, CA
Posts: 1,169
I use ChatGPT every day to help me write R code as an economic consultant, it literally makes me 10x more productive when I have a complex programming task.

It's not hype for me.
  #5  
Old Yesterday, 03:50 PM
CMiller CMiller is offline
Senior Member
 
Join Date: Jun 2012
Location: Menlo Park, CA
Posts: 1,169
I wouldn't consider data analytics and coding to be niche industries, and nearly every data scientist/analyst/programmer I know around my age (mid-30s) uses it.

You can be cautious about its potential, but so confidently putting down its utility is short-sighted when it's already making real improvements.
  #6  
Old Yesterday, 03:54 PM
charliedid charliedid is offline
Senior Member
 
Join Date: Mar 2010
Location: Chicago
Posts: 13,002
I have no idea.
  #7  
Old Yesterday, 04:11 PM
prototoast prototoast is offline
Senior Member
 
Join Date: Jun 2016
Location: Concord, CA
Posts: 5,978
Quote:
Originally Posted by CMiller View Post
I use ChatGPT every day to help me write R code as an economic consultant, it literally makes me 10x more productive when I have a complex programming task.

It's not hype for me.
If you're an economic consultant and live where you say you do, then there's a 50% chance that I had your exact job about a decade ago, and can finally say "back in my day..." we had to Google how to code the stuff we didn't understand, not any of this ChatGPT stuff.

Anyway, I think computer programming is an area where AI is really good (it kind of makes sense that a computer ought to be good at telling you what to tell a computer so the computer does what you want it to do). Now, my economic work involves much less coding and much more subject matter analysis, and the use of generative AI is both prohibited by my organization and functionally useless.

It's good where it's good, and I think it has a lot of productivity-enhancing capabilities, but I am also skeptical that it will totally transform society in the short-to-medium term. So I guess maybe I'm aligned with the median expectations about AI?
__________________
Instagram - DannAdore Bicycles
  #8  
Old Yesterday, 04:19 PM
Shane4XC Shane4XC is offline
Senior Member
 
Join Date: Apr 2023
Posts: 196
Quote:
Originally Posted by Turkle View Post
I work in an industry that is fully on-board with the AI fad, which is utterly embarrassing.

Let me tell you as clearly as I can: "AI" as it presently exists is, outside of a very few niche applications, at best an interesting toy, and at worst a dangerously unreliable solution. Its most useful applications appear to be plagiarism and the production of unbelievable amounts of useless trash content at scale, all at the cost of devastating power consumption and all the attendant consequences for climate change.

The hype around "AI" (I can't help putting it in quotes, as there is nothing even remotely similar to human intelligence happening here) is exactly like the hype around the Metaverse a year or so ago. Don't remember anything about the Metaverse? Of course you don't, it was complete BS. God, remember all the idiocy surrounding "The Blockchain" and "NFTs?"

The ridiculous hype surrounding the Metaverse and "AI" can both be explained fairly simply. The largest tech companies have already pretty much shot all their best bullets. The information revolution is complete. But Wall Street still demands explosive growth. So, in the absence of any actually useful new technologies, they are reduced to hyping inane fads.

"AI" is a solution looking for a problem, and it will be largely forgotten in a year or two.
I’m not sure what industry you’re in, but AI has way more consequences than just plagiarism and creating junk.

A couple of side projects I’ve personally touched with AI:
Anomaly detection for intrusion detection
Object detection within geospatial images

And those projects were wildly successful. Everyone sees language models and thinks that’s all AI encompasses.
  #9  
Old Yesterday, 04:34 PM
verticaldoug verticaldoug is offline
Senior Member
 
Join Date: Nov 2009
Posts: 3,331
I think you are using the wrong metric. When you say 'bubble' you are really speaking about valuation in an economic sense. In 1999, the internet craze was a bubble, and the market did crash and a lot of bad ideas went out of business. However, the internet proved really useful and has continued to grow to valuations which are really quite staggering when compared to 1999. The internet was really useful before the iPhone (or smartphones in general), but the ability to have access to the world wide web while mobile has really changed things.

So is AI a bubble? Yes, probably, since the same lack of discipline for investing in any harebrained startup seems to be upon us. But will AI prove extremely useful and here to stay? Without a doubt. In 15 years, will we look back and be staggered by how far we have progressed? We are just beginning to see the build-out of the software stack that will really improve the usability of GenAI. Just like in 1999, when the crash comes for AI, you need to take a deep breath and buy some of the survivors, because those will continue to grow.

For CMiller: I also use ChatGPT for coding in Python. It takes the drudgery out of a lot of bulk programming, and allows you to spend more time on the functionality you are hoping to achieve. It's great.

For prototoast: out of the box, ChatGPT is not useful for economics, but you need to think about how and why you use it. If you are using it to just generate text from prior knowledge, it's as worthless as most economic opinions. If you add additional knowledge to 'ground' the system in facts and curtail its use of 'prior knowledge', you can get a really useful co-pilot. Salesforce is doing this with their Einstein Copilot, integrating it with their CRM, and you get some real pickups in productivity. In a general sense, this is also what Perplexity is trying to achieve with search+AI, with transparent citations for results.

I suspect for most users, they write a lazy short prompt and expect a miracle. They get a disappointing result and say AI is hype. But if you actually spend time learning how to instruct in detailed concise language, you may surprise yourself.

The biggest problem is that many 'experts' aren't really experts, and they are just talking garbage. Since someone mentioned the Metaverse: yeah, many of the so-called metaverse experts on social media are now experts on prompting, etc.

Generative AI is just one aspect of AI. Pure generation, because of the sampling probability inherent in the models, will always be fraught with hallucinations. However, for models grounded with additional curated factual information, it can be great.

Predictive AI, scoring models, image recognition etc, are all useful in their own right. Shane mentioned some of these in his post.

The big question is whether these LLMs will lead to AGI. I doubt it. I believe Yann LeCun when he says we need to find another way.

Arthur C. Clarke said, “Any sufficiently advanced technology is indistinguishable from magic.” To most people, this is just magic.

Last edited by verticaldoug; Yesterday at 04:54 PM.
  #10  
Old Yesterday, 04:46 PM
Davist Davist is offline
Senior Member
 
Join Date: Aug 2013
Location: Philadelphia, PA
Posts: 1,614
I'm in the power end of the data center business; certainly money is being spent at an unprecedented rate. Is it a bubble? Perhaps, but as above, there are a lot of useful things being done with it across a wide range of fields.

I view it as an extension of machine learning, which at its most basic helps automate things like the spam detector on your email. It's very narrowly based, and I share the view that a "general" version of AI is a long way off and may not even be possible. I do understand the plagiarism argument, especially with writing and art images; it's hard to view it otherwise.

It's interesting that this has been a topic in science fiction, like in Dune, where they've sworn off AI and use human computers, and navigators hopped up on "spice" for the navigation.

Last edited by Davist; Yesterday at 04:48 PM.
  #11  
Old Yesterday, 04:51 PM
CMiller CMiller is offline
Senior Member
 
Join Date: Jun 2012
Location: Menlo Park, CA
Posts: 1,169
Quote:
Originally Posted by prototoast View Post
If you're an economic consultant and live where you say you do, then there's a 50% chance that I had your exact job about a decade ago, and can finally say "back in my day..." we had to Google how to code the stuff we didn't understand, not any of this ChatGPT stuff.

Anyway, I think computer programming is an area where AI is really good (it kind of makes sense that a computer ought to be good at telling you what to tell a computer so the computer does what you want it to do). Now, my economic work involves much less coding and much more subject matter analysis, and the use of generative AI is both prohibited by my organization and functionally useless.

It's good where it's good, and I think it has a lot of productivity-enhancing capabilities, but I am also skeptical that it will totally transform society in the short-to-medium term. So I guess maybe I'm aligned with the median expectations about AI?
I agree with all of this.

For programming it is absolutely useful. I switched my main analytic program from Stata to R within a few weeks thanks to AI. ChatGPT isn't prohibited here, but every single sentence we send is recorded to make sure we do not share client info. You can toggle a setting so ChatGPT does not record your inputs for their model building, but we take extra precautions.

It's easy enough to feed dummy data: "I have a sheet of data with row names: var1, var2, var3, listed in wide/long/whatever format." It is useful when you are applying a new statistical technique, or creating client-ready formatted documents that would be tedious to figure out step by step.
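The schema-only trick described above can be sketched in a few lines (a toy illustration in Python rather than R; the function and column names are made up): generate fake rows that share the real sheet's column names, so anything pasted into a prompt contains no client values.

```python
# Build a dummy wide-format dataset that mirrors a confidential sheet's
# schema (column names only), so it can be shared in a prompt without
# exposing any client values. All names here are illustrative.
import random

def make_dummy_rows(columns, n_rows=3, seed=0):
    """Return n_rows of fake numeric data keyed by the real column names."""
    rng = random.Random(seed)  # seeded so the dummy data is reproducible
    return [
        {col: round(rng.uniform(0, 100), 2) for col in columns}
        for _ in range(n_rows)
    ]

# Same schema as the real sheet, none of the real values.
rows = make_dummy_rows(["var1", "var2", "var3"])
```

The assistant sees the structure it needs to write working code against, and the real data never leaves the building.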

And yes, I am a new hire at one of those firms that are across the street from each other. Small world!
  #12  
Old Yesterday, 05:02 PM
el cheapo el cheapo is offline
Senior Member
 
Join Date: May 2018
Posts: 309
What's scary about AI is the ability to mimic voices, add them to video, and have complete control of whatever message you want to send. This is going to be a nightmare for national security.
  #13  
Old Yesterday, 05:02 PM
verticaldoug verticaldoug is offline
Senior Member
 
Join Date: Nov 2009
Posts: 3,331
Quote:
Originally Posted by Davist View Post
It's interesting this has been a topic in science fiction, like in Dune where they've sworn off AI and use human computers and some other being hopped up on "spice" (sea creature or something) for the navigators.
How can you forget the first season of Picard? AI is an existential threat.

Or V'Ger from Star Trek: The Motion Picture?

Last edited by verticaldoug; Yesterday at 05:06 PM.
  #14  
Old Yesterday, 05:12 PM
prototoast prototoast is offline
Senior Member
 
Join Date: Jun 2016
Location: Concord, CA
Posts: 5,978
Quote:
Originally Posted by verticaldoug View Post
For prototoast: out of the box, ChatGPT is not useful for economics, but you need to think about how and why you use it. If you are using it to just generate text from prior knowledge, it's as worthless as most economic opinions. If you add additional knowledge to 'ground' the system in facts and curtail its use of 'prior knowledge', I think you can get a really useful co-pilot. Salesforce is doing this with their Einstein Copilot, integrating it with their CRM, and you get some real pickups in productivity.
I wouldn't rule out the possibility that someone could use it more effectively than me, but there are a few big obstacles.

1) I am often dealing with nonpublic information, and I cannot input that knowledge into the system, because that is sharing the information beyond where it is authorized to be shared (it's conceivable in the future each organization may have its own closed instance of ChatGPT, but we're not there yet).

2) My current work is largely in tax policy, in which there is significant intersection of law and economics. There have been plenty of documented instances where generative AI has simply made up laws or legal citations, and in these cases, that's negative value add. If, for example, I am trying to explain why corporate tax receipts shifted in response to a change in law, it's really, really important I've got all the details of the law changes right. There are some folks out there looking to develop special implementations for this purpose, but this is a pretty huge endeavor on its own, and from what I've seen, they're not ready for primetime yet.

3) I'm often working on very niche topics, and in some cases I have plausibly read all there is to read on a topic. For very big topic areas, I could imagine a prompt along the lines of "summarize the literature on the economic consequences of the collapse of Lehman Brothers" might yield something informative, but "summarize the literature on the economic consequences of consumers not paying excise taxes on fishing rods purchased directly from foreign sellers" yields about as much information as if I asked you that question and you just rattled off whatever came to mind.
__________________
Instagram - DannAdore Bicycles
  #15  
Old Yesterday, 05:22 PM
verticaldoug verticaldoug is offline
Senior Member
 
Join Date: Nov 2009
Posts: 3,331
Quote:
Originally Posted by prototoast View Post
1) I am often dealing with nonpublic information, and I cannot input that knowledge into the system because that is sharing the information beyond where it is authorized to be shared (it's conceivable in the future each organization may have its own closed instance of chatGPT but we're not there yet).

I know several large firms which have their own instances of OpenAI, Google which are unique to them for use of in-house information.


Quote:
Originally Posted by prototoast View Post
2) My current work is largely in tax policy, in which there is significant intersection of law and economics. There have been plenty of documented instances where generative AI has simply made up laws or legal citations, and in these cases, that's negative value add. If, for example, I am trying to explain why corporate tax receipts shifted in response to a change in law, it's really, really important I've got all the details of the law changes right. There are some folks out there looking to develop special implementations for this purpose, but this is a pretty huge endeavor on its own, and from what I've seen, they're not ready for primetime yet.

Yes: not allowing the LLM to use 'prior knowledge', but instead giving it retrieval over an indexed legal database, where it can search and produce transparent, traceable citations which can be checked. There is a start-up called HarveyAI which is trying exactly this. I'd expect someone like Westlaw to index their libraries and build a retrieval-augmentation system for the LLMs. I don't know why this can't be done. As long as you have transparent citations, even lazy people can actually check.
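The grounding-with-citations idea above can be sketched in a few lines of Python (a toy illustration only: the corpus entries, citation IDs, and keyword scorer are all made up, and a real system would use a proper search index plus an actual LLM call):

```python
# Toy retrieval-augmented prompt builder: restrict the model to an indexed
# corpus and require traceable citations, instead of letting it answer
# from "prior knowledge". Corpus and scoring are deliberately simplistic.

def retrieve(query, corpus, k=2):
    """Rank documents by overlap with the query terms (a stand-in for a
    real search index) and return the top k, citation IDs included."""
    terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(terms & set(doc["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query, corpus):
    """Assemble a prompt that limits the model to retrieved passages and
    demands a [citation ID] for every claim, so answers can be checked."""
    passages = retrieve(query, corpus)
    context = "\n".join(f"[{d['id']}] {d['text']}" for d in passages)
    return (
        "Answer using ONLY the passages below. Cite the [id] of every "
        "passage you rely on. If the passages are insufficient, say so.\n\n"
        f"{context}\n\nQuestion: {query}"
    )

# Hypothetical mini-corpus standing in for an indexed legal database.
corpus = [
    {"id": "26USC4161", "text": "Excise tax imposed on fishing rods sold by manufacturers"},
    {"id": "26USC11", "text": "Corporate income tax rates and brackets"},
]
prompt = build_grounded_prompt("excise tax on fishing rods", corpus)
```

The point of the design is that every sentence the model emits can be traced back to an `[id]` a human can look up, which is what makes the output checkable rather than hallucination-prone.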

The problem is less the AI and more the lazy user.

Last edited by verticaldoug; Yesterday at 05:32 PM.




Powered by vBulletin® Version 3.8.7
Copyright ©2000 - 2024, vBulletin Solutions, Inc.