nigel99

AI discussion


So as not to derail the Harris thread, I brought the discussion on AI here, as I found it interesting. First off, like most buzzwords, I think it gets misused a lot. Pictures are no longer “photoshopped”, for example; now every doctored image is ‘AI’.

I’ll admit I know very little about what goes on under the hood. I’ve had some experience with machine learning, where algorithms find patterns in data correlations that we tend to miss. I know that nowadays that’s called AI, although I’m not sure I agree completely with that.
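
To give a feel for what I mean, here is a toy sketch of the kind of correlation scan that surfaces a relationship a human eyeballing spreadsheets would miss (the data is made up, nothing from any real project):

import numpy as np

# Toy example: six unrelated "sensor" columns, with one hidden link planted
rng = np.random.default_rng(seed=42)
n = 500
data = rng.normal(size=(n, 6))
data[:, 5] = 0.8 * data[:, 2] + 0.2 * rng.normal(size=n)  # the pattern to find

# Scan every pair of columns and report the strongest off-diagonal correlation
corr = np.corrcoef(data, rowvar=False)
i, j = np.unravel_index(np.abs(corr - np.eye(6)).argmax(), corr.shape)
print(f"strongest link: columns {i} and {j}, r = {corr[i, j]:.2f}")
# expect roughly: strongest link between columns 2 and 5, r close to 0.97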

I learned something interesting in the other thread about the data sources used by the big AI engines. I had assumed, incorrectly, that only ‘quality’ data was used. As was pointed out, it’s interesting that it draws on forums, social media and basically everything on the public net. I’m not sure whether they apply weightings to the data sources or whether it’s simply volume-based statistically.

I use ChatGPT quite a lot for work and study, and it’s a productivity booster. Upload log files and get a quick plain-English description, or generate small Python or bash scripts for work. It definitely makes mistakes, but generally saves a lot of time. Or stocktakes, which are the bane of my boss’s and my life; due to a shitty company structure they have us managing stock for our product line. Explaining to senior management that having a guy with a PhD in mechanical engineering, and me with a Masters in Communications, doing work a high school student could handle was expensive fell on deaf ears. ChatGPT has been great: take a photo of the shelf and get counts back in 30 seconds.
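
For the log files, you don’t even need the web interface; a few lines of Python do the same trick. A rough sketch, assuming the official openai package with an API key in OPENAI_API_KEY (the model name and file path are just examples):

from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# Keep the prompt small: send only the tail of the log
with open("app.log") as f:
    log_tail = f.read()[-8000:]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Summarize this log file in plain English and "
                    "call out any errors or anomalies."},
        {"role": "user", "content": log_tail},
    ],
)
print(response.choices[0].message.content)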

Doing a psych degree involves a huge amount of reading and filtering through lots of garbage that gets published, so being able to source papers (not using AI) and then get the synopsis and key sections saves literally hours of reading. You’ve still got to actually read the papers you shortlisted, so it’s not doing the ‘thinking’ and critical analysis for you.

If it’s purely volume-based, it’s obviously vulnerable to the old Google front-page SEO manipulation techniques.

 



The problem is that in 10 years it’ll be better at some things, and profit-oriented management will command “the AI said to do it, so do it.” And the guardrails of people who think laterally will be removed. From what I’ve read, within well-delimited domains, AI rules. Throw in a monkey wrench, and you get whack-job output.

Wendy P. 


One of the points Colbert made in that interview was really interesting, regarding the 'intellectual laziness' that AI can induce.  It reminded me of a podcast I was listening to about Air France flight 447, which crashed largely due to pilots not grasping what was going on with the plane, as they'd never really flown without a reliable auto-pilot (they kept trying to climb in the face of stall warnings).

AI can be a great tool, but my biggest concern is what happens if the lights go out after we've trained ourselves to outsource our critical thinking to it.  

  

24 minutes ago, lippy said:

One of the points Colbert made in that interview was really interesting, regarding the 'intellectual laziness' that AI can induce.  It reminded me of a podcast I was listening to about Air France flight 447, which crashed largely due to pilots not grasping what was going on with the plane, as they'd never really flown without a reliable auto-pilot (they kept trying to climb in the face of stall warnings).

AI can be a great tool, but my biggest concern is what happens if the lights go out after we've trained ourselves to outsource our critical thinking to it.  

  

My biggest concern is naive idiots in management positions assigning tasks to AI without even understanding what it is and its limitations. And from my miserable experience in dealing with corporate management, I know there is an inverse relationship between technical knowledge and the level a person has reached in the management hierarchy.

On a different aspect of AI:

"AI" has entered The Official Corporate Buzzword List. There is no regulatory body to enforce proper use of the term, so every corporate management/marketing drone is now chirping "Our product contains AI!" regardless of whether it really contains machine-learning or not. I have already seen marketing for products claiming to contain AI that I know damned well don't. At this point the term is so over-used, that claims should be interpreted as "Our product contains software that runs on a CPU".

1 minute ago, ryoder said:

My biggest concern is naive idiots in management positions assigning tasks to AI without even understanding what it is and its limitations. And from my miserable experience in dealing with corporate management, I know there is an inverse relationship between technical knowledge and the level a person has reached in the management hierarchy.

On a different aspect of AI:

"AI" has entered The Official Corporate Buzzword List. There is no regulatory body to enforce proper use of the term, so every corporate management/marketing drone is now chirping "Our product contains AI!" regardless of whether it really contains machine-learning or not. I have already seen marketing for products claiming to contain AI that I know damned well don't. At this point the term is so over-used, that claims should be interpreted as "Our product contains software that runs on a CPU".

You mean like my senior management using ‘Management Consultants’ without understanding the limitations and the technical details ^.^

My stocktake gripe from earlier: management’s response was “we will employ someone in Poland because Australia is too expensive”. The product is manufactured and consumed in Australia :( But the management consultants said to consolidate low-cost labour in Poland.

42 minutes ago, lippy said:

One of the points Colbert made in that interview was really interesting, regarding the 'intellectual laziness' that AI can induce.  It reminded me of a podcast I was listening to about Air France flight 447, which crashed largely due to pilots not grasping what was going on with the plane, as they'd never really flown without a reliable auto-pilot (they kept trying to climb in the face of stall warnings).

AI can be a great tool, but my biggest concern is what happens if the lights go out after we've trained ourselves to outsource our critical thinking to it.  

  

We are probably well down that road already in technology. In CAD and software, many engineers already don’t know the fundamentals without the tools.

A real danger, as a few have mentioned already, is that part of the laziness is blindly trusting the output.

31 minutes ago, ryoder said:

My biggest concern is naive idiots in management positions assigning tasks to AI without even understanding what it is and its limitations. And from my miserable experience in dealing with corporate management, I know there is an inverse relationship between technical knowledge and the level a person has reached in the management hierarchy.

On a different aspect of AI:

"AI" has entered The Official Corporate Buzzword List. There is no regulatory body to enforce proper use of the term, so every corporate management/marketing drone is now chirping "Our product contains AI!" regardless of whether it really contains machine-learning or not. I have already seen marketing for products claiming to contain AI that I know damned well don't. At this point the term is so over-used, that claims should be interpreted as "Our product contains software that runs on a CPU".

Yeah, I'm dealing with a couple of software houses right now that are writing user interfaces for hardware that I'm developing.  It's a frequent topic when we start talking candidly with each other that they're bombarded by clients requesting they add AI to their software package...often without knowing why to add it, what it would do to better the product, or even what it is...they just want AI because it's fashionable.

38 minutes ago, lippy said:

Yeah, I'm dealing with a couple of software houses right now that are writing user interfaces for hardware that I'm developing.  It's a frequent topic when we start talking candidly with each other that they're bombarded by clients requesting they add AI to their software package...often without knowing why to add it, what it would do to better the product, or even what it is...they just want AI because it's fashionable.

You’ve got to think like a sales or management person. Filtering, linear regression, or a bit of averaging? No problem, slap an AI label on it!
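
Honestly, the entire ‘AI engine’ in some of these products is probably about this sophisticated (a tongue-in-cheek sketch, obviously not any vendor’s actual code):

def ai_engine(readings: list[float], window: int = 3) -> list[float]:
    """Revolutionary AI-powered predictive analytics.
    (It's a moving average.)"""
    return [
        sum(readings[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(readings))
    ]

print(ai_engine([10.0, 12.0, 11.0, 13.0, 15.0]))  # [11.0, 12.0, 13.0]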

6 hours ago, wmw999 said:

The problem is that in 10 years it’ll be better at some things, and profit-oriented management will command “the AI said to do it, so do it.” And the guardrails of people who think laterally will be removed. From what I’ve read, within well-delimited domains, AI rules. Throw in a monkey wrench, and you get whack-job output.

Wendy P. 

Not within ten years - now. AI tools, with no independent verification of their accuracy or biases, are already being used in areas like law enforcement, border control and retail security to root out suspected criminals and undesirables.
When a security guard kicks a black teenager out of a shopping mall because AI says he looks like a shoplifter, is the guard going to care enough to check, or is it just ‘the system said so’?

If you get arrested and put in jail because an AI tool says you match someone with warrants, then someone checking and finding the error at that point probably isn’t good enough.

On the flip side, they are apparently very good at scanning and finding abnormalities on X-rays, with the absolute expectation that doctors will check anything that gets flagged.

15 hours ago, nigel99 said:

A real danger, as a few have mentioned already, is that part of the laziness is blindly trusting the output.

Hi Nigel,

This reminds me of something Prof. Klump, my physics instructor many, many yrs ago, once told us about.  A few yrs earlier, he had had a really bright young student.  The task was, with the info given, to determine the height of the Great Pyramid.  This student made a small mistake & came up with 25 miles high.  Obviously, not possible.

Klump said to always look at your results and think about whether they could even be in the realm of possibility.

I have never forgotten that.

Oh, yes; my local tv news people now seem to use AI in just about every story except the weather.  What a bunch of clowns.

Jerry Baumchen

 


I see a big future for AI in work and office environments. Especially when looking for information, or in situations I currently need one of my analysts to model out in Excel.

 

Sure, I could ask my assistant to find a specific piece of information somewhere, but how much quicker would it be if I could just ask our office AI? The current version of the Microsoft AI offering is already getting close to being able to do that.

 


I have spent the last 8 months working on a Data Governance project. One of the biggest parts of the project is the integration with AI. It took me at least a month to repeatedly explain that when we talk about data governance and AI, it isn't about "integration with" but about "protection from." I have found it very difficult to get management to understand that the biggest concern of AI in a corporate environment is protecting internal corporate data from leaking into the AI learning model and your confidential data becoming part of the model. The introduction of "AI" to the masses has changed a lot of the concepts and previous best practices concerning data governance.
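
To make the "protection from" idea concrete, here is a deliberately oversimplified sketch (the patterns are hypothetical examples of mine; real data governance tooling such as DLP policies and sensitivity labels is far more sophisticated and enforced at the platform level). The point is that anything confidential gets scrubbed before it can ever reach an external model:

import re

# Hypothetical confidential markers; a real policy would be centrally
# managed and far broader than two regexes.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # SSN-style identifiers
    re.compile(r"(?i)\bproject\s+\w+\b"),   # internal project code names
]

def scrub(text: str) -> str:
    """Replace anything matching a confidential pattern with [REDACTED]."""
    for pattern in CONFIDENTIAL_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

prompt = "Summarize Q3 margins for project Falcon, owner SSN 123-45-6789."
print(scrub(prompt))
# Summarize Q3 margins for [REDACTED], owner SSN [REDACTED].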

15 minutes ago, okalb said:

I have spent the last 8 months working on a Data Governance project. One of the biggest parts of the project is the integration with AI. It took me at least a month to repeatedly explain that when we talk about data governance and AI, it isn't about "integration with" but about "protection from." I have found it very difficult to get management to understand that the biggest concern of AI in a corporate environment is protecting internal corporate data from leaking into the AI learning model and your confidential data becoming part of the model. The introduction of "AI" to the masses has changed a lot of the concepts and previous best practices concerning data governance.

100% correct. This is my biggest fear as we work through introducing AI internally. Microsoft claims their AI "Copilot" can be integrated into SharePoint, will not share data outside the organization, AND allows restricting access to information by user.

We are currently testing it for meeting minutes and tasks, which has been pretty decent. It listens to the meeting, keeps a transcript and automatically assigns tasks to people as decided in the meeting. 

I have been trying to get my owners to understand that employees having access to ChatGPT likely means company information is being shared on that platform.

2 minutes ago, SkyDekker said:

100% correct. This is my biggest fear as we work through introducing AI internally. Microsoft claims their AI "Copilot" can be integrated into SharePoint, will not share data outside the organization, AND allows restricting access to information by user.

This is exactly what I have been working on. Microsoft Purview is part of M365 (depending on licenses); Purview is Microsoft's data governance product. If all of the exfiltration settings are configured correctly, it can be relatively well protected. The problem is that it is also very easy to misconfigure and not realize it. The project I have been working on is training for internal MS security engineers on how to properly configure their customers' environments to allow the use of Copilot while protecting confidential data from being leaked. We have had to rewrite many of the standards and best practices along the way.

44 minutes ago, okalb said:

This is exactly what I have been working on. Microsoft Purview is part of M365 (depending on licenses); Purview is Microsoft's data governance product. If all of the exfiltration settings are configured correctly, it can be relatively well protected. The problem is that it is also very easy to misconfigure and not realize it. The project I have been working on is training for internal MS security engineers on how to properly configure their customers' environments to allow the use of Copilot while protecting confidential data from being leaked. We have had to rewrite many of the standards and best practices along the way.

I'll look forward to using the fruits of your labour!


Regarding a more mundane application of “AI”, I help to moderate a forum that is largely devoted to fossil identification.  Lately we have been getting lots of posts from people who are confused by identifications given by a Google app.  Most of the ID suggestions are so far off as to be comical.  One poster was concerned that Google misidentified a common fossil as a toxic mineral, and was insistent that they needed to go to the hospital immediately.  I’m sure it’s a challenge to get an algorithm to parse out the meaningful data from a poorly photographed image, make appropriate comparisons to hundreds of thousands (or millions) of possibilities, and come up with a plausible identification.  However, we have many humans on the forum who are excellent at doing just that.

2 hours ago, GeorgiaDon said:

One poster was concerned that Google misidentified a common fossil as a toxic mineral, and was insistent that they needed to go to the hospital immediately.

Clearly their definition of 'immediately' is not the same as mine ; p

On 9/30/2024 at 11:13 PM, johnhking1 said:

If AI searches the internet for information, and you post enough bad or inaccurate information, will AI come up with bad answers?

It will also come up with bad answers all on its own. It doesn't just copy and paste information from the web, right? It generates new sentences and paragraphs to convey that information to you. But it doesn't know what any of the things it says to you actually mean. It has no human ability to do a gross error check on whether the information sounds right; it's just putting together sentences that sound good in the desired language style.

So a lawyer using AI to write briefs was found out because the briefs were riddled with references to case law and even circuit courts that simply don't exist, and AI cookbooks will tell you how to make a lovely Chinese-inspired chicken, strawberry jam and garlic lasagne.

10 minutes ago, jakee said:

It will also come up with bad answers all on its own. It doesn't just copy and paste information from the web, right? It generates new sentences and paragraphs to convey that information to you. But it doesn't know what any of the things it says to you actually mean. It has no human ability to do a gross error check on whether the information sounds right; it's just putting together sentences that sound good in the desired language style.

So a lawyer using AI to write briefs was found out because the briefs were riddled with references to case law and even circuit courts that simply don't exist, and AI cookbooks will tell you how to make a lovely Chinese-inspired chicken, strawberry jam and garlic lasagne.

I’ve seen a few articles saying that AI has suggested recipes for mustard gas and similar. The whole point is that it’s ‘supposed’ to be an aid; you still need intelligence, and you need to verify the results.

There is a really interesting BBC article on mapping the brain of a fly for the first time, and AI was an enormous help, but it made 3 million mistakes that had to be corrected by hand.


