nigel99 475 #1 Posted September 27, 2024

So as not to derail the Harris thread, I brought the discussion on AI here as I found it interesting.

First off, like most buzzwords I think it gets misused a lot. Pictures are no longer "photoshopped"; now every doctored image is "AI", as an example. I'll admit I know very little about what goes on under the hood. I've had some experience with machine learning, where algorithms find patterns in data correlations that we tend to miss. I know nowadays that's called AI, although I'm not sure I agree completely with that.

I learned something interesting in the other thread about the data sources used by the big AI engines. I had assumed, incorrectly, that only "quality" data was used. As was pointed out, it's interesting that it draws on forums, social media and basically everything on the public net. I'm not sure if they have weightings on the data sources or not, or if it's simply volume based statistically.

I use ChatGPT quite a lot for work and study, and it's a productivity booster. Upload log files and get a quick plain-English description, or generate small python or bash scripts for work. It definitely makes mistakes, but generally saves a lot of time.

Or stock takes, which are the bane of my boss's and my life; due to shitty company structure they have us managing stock for our product line. Explaining to senior management that having a guy with a PhD in mechanical engineering and me with a Masters in Communications do this was expensive, and that it could be done by a high school student, fell on deaf ears. ChatGPT has been great: take a photo of the shelf and get counts back in 30 seconds.

Doing a psych degree involves a huge amount of reading and filtering through lots of garbage that gets published, so being able to source papers (not using AI), but then getting the synopsis and sections, saves literally hours of reading. You've still got to actually read the papers that you shortlisted, so it's not doing the 'thinking' and critical analysis.

If it's purely volume based, it's obviously vulnerable to the old Google front-page SEO manipulation techniques.
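As a rough illustration of the log-summary workflow described above — a minimal sketch assuming the official OpenAI Python client, with the file path, prompt and model name as placeholders rather than anyone's actual setup:

```python
# Minimal sketch: ask an LLM for a plain-English summary of a log file.
# Assumes the official OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY in the environment; "app.log" and the model name are
# illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("app.log", "r", encoding="utf-8", errors="replace") as f:
    log_text = f.read()[:20000]  # truncate to stay within the context window

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "You summarize log files in plain English for engineers."},
        {"role": "user",
         "content": f"Summarize the errors and anomalies in this log:\n{log_text}"},
    ],
)
print(response.choices[0].message.content)  # a starting point, not a verdict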
wmw999 2,447 #2 September 27, 2024

The problem is that in 10 years it'll be better at some things, and profit-oriented management will command "the AI said to do it, so do it." And the guardrails of people who think laterally will be removed. From what I've read, within well-delimited domains, AI rules. Throw in a monkey wrench, and you get a wack job.

Wendy P.
lippy 918 #3 September 27, 2024

One of the points Colbert made in that interview was really interesting, regarding the 'intellectual laziness' that AI can induce. It reminded me of a podcast I was listening to about Air France flight 447, which crashed largely due to pilots not grasping what was going on with the plane, as they'd never really flown without a reliable auto-pilot (they kept trying to climb in the face of stall warnings). AI can be a great tool, but my biggest concern is what happens if the lights go out after we've trained ourselves to outsource our critical thinking to it.
ryoder 1,590 #4 September 27, 2024

24 minutes ago, lippy said:
One of the points Colbert made in that interview was really interesting, regarding the 'intellectual laziness' that AI can induce. It reminded me of a podcast I was listening to about Air France flight 447, which crashed largely due to pilots not grasping what was going on with the plane, as they'd never really flown without a reliable auto-pilot (they kept trying to climb in the face of stall warnings). AI can be a great tool, but my biggest concern is what happens if the lights go out after we've trained ourselves to outsource our critical thinking to it.

My biggest concern is naive idiots in management positions assigning tasks to AI without even understanding what it is and what its limitations are. And from my miserable experience in dealing with corporate management, I know there is an inverse relationship between technical knowledge and the level a person has reached in the management hierarchy.

On a different aspect of AI: "AI" has entered The Official Corporate Buzzword List. There is no regulatory body to enforce proper use of the term, so every corporate management/marketing drone is now chirping "Our product contains AI!" regardless of whether it really contains machine learning or not. I have already seen marketing for products claiming to contain AI that I know damned well don't. At this point the term is so over-used that claims should be interpreted as "Our product contains software that runs on a CPU".
nigel99 475 #5 September 27, 2024

1 minute ago, ryoder said:
My biggest concern is naive idiots in management positions assigning tasks to AI without even understanding what it is and what its limitations are. And from my miserable experience in dealing with corporate management, I know there is an inverse relationship between technical knowledge and the level a person has reached in the management hierarchy. On a different aspect of AI: "AI" has entered The Official Corporate Buzzword List. There is no regulatory body to enforce proper use of the term, so every corporate management/marketing drone is now chirping "Our product contains AI!" regardless of whether it really contains machine learning or not. I have already seen marketing for products claiming to contain AI that I know damned well don't. At this point the term is so over-used, that claims should be interpreted as "Our product contains software that runs on a CPU".

You mean like how my senior management uses 'Management Consultants' without understanding the limitations or the technical details. On my stocktake gripe from earlier, management's response was "we will employ someone in Poland because Australia is too expensive". The product is manufactured and consumed in Australia :( But the management consultants said to consolidate low-cost labour in Poland.
nigel99 475 #6 September 27, 2024

42 minutes ago, lippy said:
One of the points Colbert made in that interview was really interesting, regarding the 'intellectual laziness' that AI can induce. It reminded me of a podcast I was listening to about Air France flight 447, which crashed largely due to pilots not grasping what was going on with the plane, as they'd never really flown without a reliable auto-pilot (they kept trying to climb in the face of stall warnings). AI can be a great tool, but my biggest concern is what happens if the lights go out after we've trained ourselves to outsource our critical thinking to it.

We are probably well down that road already in technology. In CAD and software, many engineers already don't know the fundamentals without the tools. A real danger, as a few have mentioned already, is that part of the laziness is blindly trusting the output.
lippy 918 #7 September 27, 2024

31 minutes ago, ryoder said:
My biggest concern is naive idiots in management positions assigning tasks to AI without even understanding what it is and what its limitations are. And from my miserable experience in dealing with corporate management, I know there is an inverse relationship between technical knowledge and the level a person has reached in the management hierarchy. On a different aspect of AI: "AI" has entered The Official Corporate Buzzword List. There is no regulatory body to enforce proper use of the term, so every corporate management/marketing drone is now chirping "Our product contains AI!" regardless of whether it really contains machine learning or not. I have already seen marketing for products claiming to contain AI that I know damned well don't. At this point the term is so over-used, that claims should be interpreted as "Our product contains software that runs on a CPU".

Yeah, I'm dealing with a couple of software houses right now that are writing user interfaces for hardware that I'm developing. It's a frequent topic, when we start talking candidly with each other, that they're bombarded by clients requesting they add AI to their software package...often without knowing why to add it, what it would do to better the product, or even what it is...they just want AI because it's fashionable.
nigel99 475 #8 September 27, 2024

38 minutes ago, lippy said:
Yeah, I'm dealing with a couple of software houses right now that are writing user interfaces for hardware that I'm developing. It's a frequent topic, when we start talking candidly with each other, that they're bombarded by clients requesting they add AI to their software package...often without knowing why to add it, what it would do to better the product, or even what it is...they just want AI because it's fashionable.

You've got to think like a sales or management person. Filtering, linear regression, or a bit of averaging? No problem, slap an AI label on it!
jakee 1,489 #9 September 27, 2024

6 hours ago, wmw999 said:
The problem is that in 10 years it'll be better at some things, and profit-oriented management will command "the AI said to do it, so do it." And the guardrails of people who think laterally will be removed. From what I've read, within well-delimited domains, AI rules. Throw in a monkey wrench, and you get a wack job.

Not within ten years - now. AI tools, with no independent verification of their accuracy or biases, are already being used in areas like law enforcement, border control and retail security to root out suspected criminals and undesirables. When a security guard kicks a black teenager out of a shopping mall because AI says he looks like a shoplifter, is the guard going to care enough to check, or is it just 'the system said so'? If you get arrested and put in jail because an AI tool says you match someone with warrants, then checking and finding the error at that point is probably not even good enough.

On the flip side, they are apparently very good at scanning and finding abnormalities on X-rays, with the absolute expectation that doctors will check anything that gets flagged.
JerryBaumchen 1,363 #10 September 27, 2024

15 hours ago, nigel99 said:
A real danger, as a few have mentioned already, is that part of the laziness is blindly trusting the output.

Hi Nigel,

This reminds me of something Prof. Klump, my physics instructor many, many yrs ago, once told us about. A few yrs earlier, he had had a really bright young student. The task was, with the info given, to determine the height of the Great Pyramid. This student made a small mistake and came up with 25 miles high. Obviously, not possible. Klump said to always look at your results and think about whether they could even be in the realm of possibility. I have never forgotten that.

Oh, yes; my local tv news people now seem to use AI in just about every story except the weather. What a bunch of clowns.

Jerry Baumchen
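Prof. Klump's habit translates directly into code. A minimal sketch using the published dimensions of the Great Pyramid (base about 230.3 m, face slope about 51.84 degrees); the assertion at the end is the "realm of possibility" test the student skipped:

```python
# Compute a result, then check it is even in the realm of possibility.
import math

base = 230.3       # side length of the square base, metres (published figure)
slope_deg = 51.84  # angle of the faces from horizontal (published figure)

height = (base / 2) * math.tan(math.radians(slope_deg))
print(f"Computed height: {height:.1f} m")  # ~146.6 m, the accepted original height

# The sanity check: no human-built structure is anywhere near 25 miles tall.
assert height < 1000, "Over a kilometre tall? Recheck the math!"
```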
SkyDekker 1,465 #11 September 27, 2024

I see a big future for AI in work and office environments, especially when looking for information, or in situations where I currently need one of my analysts to model something out in Excel. Sure, I could ask my assistant to find a specific piece of information somewhere, but how much quicker would it be if I could just ask our office AI? The current version of the Microsoft AI offering is already getting close to being able to do that.
okalb 104 #12 September 27, 2024

I have spent the last 8 months working on a Data Governance project. One of the biggest parts of the project is the integration with AI. It took me at least a month of repeatedly explaining that when we talk about data governance and AI, it isn't about "integration with" but about "protection from." I have found it very difficult to get management to understand that the biggest concern with AI in a corporate environment is protecting internal corporate data from leaking into the AI learning model and your confidential data becoming part of the model. The introduction of "AI" to the masses has changed a lot of the concepts and previous best practices concerning data governance.
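To make "protection from" concrete, here is a deliberately crude sketch of one such measure: scrubbing confidential-looking patterns out of text before it is ever sent to an external AI service. The patterns below are invented examples; real data-governance tooling (DLP policies, sensitivity labels and the like) is far more involved than a few regexes:

```python
# Illustrative only: redact sensitive-looking substrings before text
# leaves the organization. Patterns are made-up examples, not a real policy.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),           # US-SSN-shaped numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),   # email addresses
    (re.compile(r"\bPROJ-[A-Z0-9]{4,}\b"), "[PROJECT-CODE]"),  # hypothetical internal codes
]

def scrub(text: str) -> str:
    """Replace confidential-looking substrings with placeholders."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(scrub("Contact jane.doe@corp.example about PROJ-X9TQ4, SSN 123-45-6789."))
# -> "Contact [EMAIL] about [PROJECT-CODE], SSN [SSN]."
```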
SkyDekker 1,465 #13 September 27, 2024

15 minutes ago, okalb said:
I have spent the last 8 months working on a Data Governance project. One of the biggest parts of the project is the integration with AI. It took me at least a month of repeatedly explaining that when we talk about data governance and AI, it isn't about "integration with" but about "protection from." I have found it very difficult to get management to understand that the biggest concern with AI in a corporate environment is protecting internal corporate data from leaking into the AI learning model and your confidential data becoming part of the model. The introduction of "AI" to the masses has changed a lot of the concepts and previous best practices concerning data governance.

100% correct. That is my biggest fear as we work through introducing AI internally. Microsoft claims their AI "Copilot" can be integrated into SharePoint, would not share data outside the organization, AND allows for restricting access to information by user.

We are currently testing it for meeting minutes and tasks, which has been pretty decent. It listens to the meeting, keeps a transcript and automatically assigns tasks to people as decided in the meeting.

I have been trying to get my owners to understand that employees having access to ChatGPT means that company information is likely being shared on that platform.
okalb 104 #14 September 27, 2024

2 minutes ago, SkyDekker said:
100% correct. That is my biggest fear as we work through introducing AI internally. Microsoft claims their AI "Copilot" can be integrated into SharePoint, would not share data outside the organization, AND allows for restricting access to information by user.

This is exactly what I have been working on. Microsoft Purview is part of M365 (depending on licenses). Purview is Microsoft's data governance product. If all of the exfiltration settings are configured correctly, it can be relatively well protected. The problem is that it is also very easy to misconfigure it and not realize it. The project I have been working on is training for internal MS security engineers on how to properly configure their customers' environments to allow the use of Copilot while protecting confidential data from being leaked. We have had to rewrite many of the standards and best practices along the way.
SkyDekker 1,465 #15 September 27, 2024

44 minutes ago, okalb said:
This is exactly what I have been working on. Microsoft Purview is part of M365 (depending on licenses). Purview is Microsoft's data governance product. If all of the exfiltration settings are configured correctly, it can be relatively well protected. The problem is that it is also very easy to misconfigure it and not realize it. The project I have been working on is training for internal MS security engineers on how to properly configure their customers' environments to allow the use of Copilot while protecting confidential data from being leaked. We have had to rewrite many of the standards and best practices along the way.

I'll look forward to using the fruits of your labour!
GeorgiaDon 362 #16 September 29, 2024

Regarding a more mundane application of "AI", I help to moderate a forum that is largely devoted to fossil identification. Lately we are getting lots of posts from people who are confused by identifications given by a Google app. Most of the ID suggestions are so far off as to be comical. One poster was concerned that Google misidentified a common fossil as a toxic mineral, and was insistent that they needed to go to the hospital immediately. I'm sure it's a challenge to get an algorithm to parse out the meaningful data from a poorly photographed image, make appropriate comparisons to hundreds of thousands (or millions) of possibilities, and make a plausible suggestion as to an identification. However, we have many humans on the forum who are excellent at doing just that.
jakee 1,489 #17 September 29, 2024

2 hours ago, GeorgiaDon said:
One poster was concerned that Google misidentified a common fossil as a toxic mineral, and was insistent that they needed to go to the hospital immediately.

Clearly their definition of 'immediately' is not the same as mine ;p
ryoder 1,590 #18 September 30, 2024

Someone has finally found a task AI can do reliably! LOL!

AI researchers demonstrate 100% success rate in bypassing online CAPTCHAs
okalb 104 #19 September 30, 2024

If you trust the answers that you get from AI, I recommend asking it this simple question: How many Rs are in the word strawberry?
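The ground truth takes one line of Python — the kind of trivial check worth running before trusting a confident answer (chat models often miscount here, reportedly because they process tokens rather than individual letters):

```python
# Count the Rs ourselves instead of trusting the chatbot.
print("strawberry".count("r"))  # -> 3; models have confidently answered 2
```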
johnhking1 96 #20 September 30, 2024

If AI searches the internet for information, and enough bad or inaccurate information gets posted, will AI come up with bad answers?
gowlerk 2,193 #21 September 30, 2024

35 minutes ago, johnhking1 said:
If AI searches the internet for information, and enough bad or inaccurate information gets posted, will AI come up with bad answers?

So it's like what you would have if Wikipedia had no editors.
ryoder 1,590 #22 October 3, 2024

A Princeton prof and his grad student have just released a new book: AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference.

I am currently watching them being interviewed by Adam Conover. They have confirmed everything I said in Post #4, and more.

Their website: AI Snake Oil
jakee 1,489 #23 October 3, 2024

On 9/30/2024 at 11:13 PM, johnhking1 said:
If AI searches the internet for information, and enough bad or inaccurate information gets posted, will AI come up with bad answers?

It will also come up with bad answers all on its own. It doesn't just copy and paste information from the web, right? It generates new sentences and paragraphs to convey that information to you. But it doesn't know what any of the things it says to you actually mean. It has no human ability to do a gross error check on whether the information sounds right; it's just putting together sentences that sound good in the desired language style. So a lawyer using AI to write briefs was found out because they were riddled with references to caselaw and even circuit courts that simply don't exist, and AI cookbooks will tell you how to make a lovely Chinese-inspired chicken, strawberry jam and garlic lasagne.
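A toy illustration of that point, using nothing beyond the Python standard library: a generator that only knows which words tend to follow which will produce fluent-looking output with no idea whether any of it is true. Real LLMs are vastly more capable, but the gap between "statistically plausible" and "factually grounded" is the same in kind:

```python
# Toy bigram text generator: locally plausible, globally meaningless.
import random

random.seed(1)

# Tiny hand-built bigram table: word -> words that may follow it.
bigrams = {
    "the":     ["court", "recipe", "lasagne"],
    "court":   ["cited", "ruled"],
    "cited":   ["the"],
    "ruled":   ["that"],
    "that":    ["the"],
    "recipe":  ["uses"],
    "uses":    ["the"],
    "lasagne": ["uses"],
}

word = "the"
sentence = [word]
for _ in range(10):
    word = random.choice(bigrams[word])  # pick any statistically "plausible" next word
    sentence.append(word)

print(" ".join(sentence))
# e.g. "the court ruled that the recipe uses the lasagne ..." --
# grammatical-ish, checked against nothing, and possibly false.
```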
jakee 1,489 #24 October 3, 2024

4 hours ago, ryoder said:
I am currently watching them being interviewed by Adam Conover. They have confirmed everything I said in Post #4, and more.

Oh no, he has developed Podcast Man Voice!
nigel99 475 #25 October 3, 2024

10 minutes ago, jakee said:
It will also come up with bad answers all on its own. It doesn't just copy and paste information from the web, right? It generates new sentences and paragraphs to convey that information to you. But it doesn't know what any of the things it says to you actually mean. It has no human ability to do a gross error check on whether the information sounds right; it's just putting together sentences that sound good in the desired language style. So a lawyer using AI to write briefs was found out because they were riddled with references to caselaw and even circuit courts that simply don't exist, and AI cookbooks will tell you how to make a lovely Chinese-inspired chicken, strawberry jam and garlic lasagne.

I've seen a few articles saying that AI has suggested cooking recipes for mustard gas and similar. The whole point is that it is 'supposed' to be an aid; you still need intelligence, and to verify the results. There is a really interesting BBC article on mapping the brain of a fly for the first time, and AI was an enormous help - but it made 3 million mistakes that had to be corrected by hand.