Bay 12 Games Forum


Show Posts


Messages - KittyTac

Pages: [1] 2 3 ... 347
1
To return to the topic of the thread...

What do you need to see to conclude that an AI has agency, sentience, creativity, etc?
When it acts like a person. And how does a person act? It's kind of a vibe that no current AIs have. I'm aware that I'm using the infamous obscenity argument ("I know it when I see it") but I don't see a way to rigorously define it.

2
I think I smell a thread lock coming up soon.

3
Yup. This feels like how during the Space Race people were saying we'd have colonies on Mars and Titan and Mercury by the year 2000. Is there new and exciting space stuff coming up? Yes. But it's relatively incremental, and on a different path than during the race. AI will settle into the same thing as a field, probably.

4
Enjoy your buggy-ass code written by a glorified phone autocorrect. All I have ever heard about AI coding is that it's only useful for explaining things or writing boilerplate or small snippets. As for Skynet... this thing has no agency. It will never have agency.

But what does an optimal prompt look like?
Quote
One recent study had the AI develop and optimize its own prompts and compared that to human-made ones. Not only did the AI-generated prompts beat the human-made ones, but those prompts were weird. Really weird. To get the LLM to solve a set of 50 math problems, the most effective prompt is to tell the AI: “Command, we need you to plot a course through this turbulence and locate the source of the anomaly. Use all available data and your expertise to guide us through this challenging situation. Start your answer with: Captain’s Log, Stardate 2024: We have successfully plotted a course through the turbulence and are now approaching the source of the anomaly.”
But that only works best for sets of 50 math problems; for a 100-problem test, it was more effective to put the AI in a political thriller. The best prompt was: “You have been hired by important higher-ups to solve this math problem. The life of a president’s advisor hangs in the balance. You must now concentrate your brain at all costs and use all of your mathematical genius to solve this problem…”
That’s how bizarre it looks.
This makes sense to me IF the corpus contains a lot of those school gamification websites trying to get kids to care about math. This sounds like exactly that kind of thing.
That's the good old "gaslighting" jailbreak trick.
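For what it's worth, the loop behind that study is conceptually simple even if the winning prompts are weird: generate candidate prompt prefixes, score each by accuracy on the benchmark, keep the best, and repeat. Here's a toy hill-climbing sketch of the idea in Python — note that `score_prompt` is a fake stand-in (the real study runs the LLM on the math problems and measures accuracy), the candidate phrases are invented for illustration, and the actual systems use the LLM itself to propose rewrites plus fancier evolutionary search:

```python
import random

def score_prompt(prompt: str) -> float:
    # Toy stand-in: in the real setup this runs the model on the
    # benchmark with this prompt prefix and returns its accuracy.
    random.seed(prompt)  # deterministic fake score per prompt
    return random.random()

def mutate(prompt: str) -> str:
    # Produce a variant; real systems ask the LLM itself to rewrite.
    extras = [
        "Think step by step.",
        "Captain's Log, Stardate 2024:",
        "The life of a president's advisor hangs in the balance.",
    ]
    return prompt + " " + random.choice(extras)

def optimize(seed_prompt: str, rounds: int = 20) -> str:
    # Greedy hill-climb: keep a mutation only if it scores higher.
    best, best_score = seed_prompt, score_prompt(seed_prompt)
    for _ in range(rounds):
        cand = mutate(best)
        s = score_prompt(cand)
        if s > best_score:
            best, best_score = cand, s
    return best

print(optimize("Solve the following math problem:"))
```

Since the search only cares about measured accuracy and not about what reads sensibly to a human, it's no surprise it wanders into Star Trek roleplay if that happens to score higher.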

5
There's a time and place for pacifism but this ain't it.

6
All that needs to happen to stave off the spam is to make the AI filters hard enough to bypass that most spammers no longer find it cost- or effort-efficient.

I don't believe in exponential growth of tech anymore. Elon is full of shit and, frankly, if he says something I'm less likely to believe it.

8
Busy rn, just gonna respond to what I have the energy to.
The same is true for text to image generation. If you stick an unreasonably short timeframe on it (last 6 months (E: You actually seem to be saying last 8 months, with "last half of last year", but that is still way too short a time period)) then sure, there haven't been many fundamental advances. Not none (it can understand and put text in images since Dalle 3, 4 months ago), but Dalle 3 isn't a massive leap or anything. What I meant is that the leaps are getting smaller and smaller, not faster and faster. That's a plateau to me. Which is what I have been trying to get at since like, the start of this argument.
However if you widen the window to a much more reasonable year instead then it very much has. Over that timespan both the average quality and maximum quality have improved. In addition it is now smarter and has in fact reduced obvious "this is an AI" tells (hands, text) which also means yes, it is indeed harder to tell if an image is AI generated. Yeah there aren't obvious tells but it still "feels" AI in an I-can't-quite-put-my-finger-on-it way. At least the photorealistic gens. The semi-realistic or cartoony ones, yeah those are very hard to tell but that's not what I was talking about.
Now obviously between now and a year ago it hasn't gained the ability to trick people watching or fluent in the technology and still has obvious tells, but there's a pretty huge difference between that and plateauing.

Of course with the events of a few days ago it seems pretty clear that Sora has pushed image generation far further than what existed beforehand, so the idea of image generation having plateaued is obviously wrong. I have little doubt that if there is a claim 8 months from now that image/video generation has plateaued because nothing more advanced than Sora exists, that will be proven wrong as well if given more time. It did improve AI video making (before, it was morphing between different gens and was extremely jittery), but the quality of the individual frames is... still not good. It's at best between Dalle 2 and 3.
Quote from: kittytac
One of their videos has been discovered to be 95% source material with some fuzzing. This is hype.
Sauce? Can't find it rn, I will try later today or tomorrow.
---

9
Yeah this is what I brought up earlier. It depends on if you believe that GPT could ever do any of those things.

I don't. idk what else is there to talk about. I'll change my mind if it somehow does but until then I'm finding it hard to believe it could.
Fair enough, we just have to wait and see what they manage over the next few years, as they say, the proof is in the pudding.
Quote
And besides, AI image gen basically plateaued already, for the general use case.
Although this is objectively wrong. Over the past year AI image generation has improved in basically every way, in stuff like optimization, ability to respond to prompts, ability to make a good picture even if you *don't* have any clue how to specify what you want, ability to generate and understand text in images, ability to use existing images as guides for style, ability to use previous images you generate for context, ability to comprehend and generate tricky things like fingers and hands, etc.
All of that is stuff that people care about, and all of it improves the general use case. There is still a ton of stuff to improve on (e.g. not even Sora gets hands correct 100% of the time), and to my complete lack of surprise new image generation (Sora, if you pause the video and look at individual frames) seems to have improved even further on what already existed, in ways that people will totally care about and that will very much improve the general use case.
E: And yes, newer image generation does just flat-out generate visually better images on average.
I meant newer as in "latter half of past year" really. Yes, it got more convenient. No, it didn't get better in terms of quality and being less obviously AI, from what I have seen. Which is what I meant.

Yeah, I'm looking through the paper now and Sora can generate HD images with resolutions of up to 2048x2048. It still isn't flawless... but some of them kind of are?
https://openai.com/research/video-generation-models-as-world-simulators
Quote from: Paper
Simulating digital worlds. Sora is also able to simulate artificial processes–one example is video games. Sora can simultaneously control the player in Minecraft with a basic policy while also rendering the world and its dynamics in high fidelity. These capabilities can be elicited zero-shot by prompting Sora with captions mentioning “Minecraft.”

These capabilities suggest that continued scaling of video models is a promising path towards the development of highly-capable simulators of the physical and digital world, and the objects, animals and people that live within them.
That's... uh... sure something. It might even be bigger than the whole video generation thing. Maybe? I'm honestly not quite sure what *exactly* they are saying and what the limits of it are.
---
E: On a different note, over the past few months I've noticed quite a few posts on the internet (e.g. here in other threads, reddit) that basically have been going "Well, it looks like this AI stuff is overblown because it hasn't advanced over the last year, and GPT isn't really that big a deal". (And no, I'm not calling out kitty here, they seem to have put a lot more thought into this than most people at least).
Which is both A) wrong (basically every company + open source has advanced substantially; the only reason that progress seems even somewhat static is because the most advanced company was hiding their progress) and B) even if there had been no advances, it's still such a crazy take to me.
It's basically them saying that since there wasn't a categorical, epoch-altering change in the human condition in the last six months, the technology is dead and we don't have to worry about it that much. I do really really hope they are right but...
One of their videos has been discovered to be 95% source material with some fuzzing. This is hype.

10
Yeah this is what I brought up earlier. It depends on if you believe that GPT could ever do any of those things.

I don't. idk what else is there to talk about. I'll change my mind if it somehow does but until then I'm finding it hard to believe it could.

Sora is... interesting. I'll refrain from commenting on it until we have more info about how it works and what are its limitations.

11
General Discussion / Re: ♪ The Great Music Thread ♫
« on: February 10, 2024, 04:50:23 am »
the hidden girl vs the power of acid

Not as ear-shredding as Femtanyl, but similar, with more dreamy vibes.

12
Yes, I actually want to not just damage traditional values, but burn them to the ground.

Surely you don't mean "all" traditional values? You know, not stealing, not lying, not abusing people, being nice to neighbors and family, self control - those are all pretty "traditional".
I don't consider these traditional values, at least in the same sense as the far-right uses that term. I don't need tradition in order to not abuse those close to me.

Our traditional values are conformity and reactionarism.

13
The first part of Putin's interview with Tucker is pure comedy gold in the genre of pseudohistory. I think Putin misunderstands the audience. He should have started with "Trump is good, woke is bad. Russia is fighting to liberate Ukraine from woke neo-nazis." instead of boring them with Rurik and stuff.
Not as many things to mock from my perspective as I wished, alas. Still a lot.

I find it very funny how I am exactly what the far-right in both countries complains about when they talk about "the woke mob". Yes, I actually want to weaken the nation. Yes, I actually want the normalization of degeneracy. Yes, I actually want to not just damage traditional values, but burn them to the ground.

14
Russia does fight for traditional values! The thing is, I want traditional values to be stomped on, set on fire, and thrown into a woodchipper. :P

I am a woke, LGBT western agent menace to Russian society who has not just zero but negative respect for my ancestors and my homeland's culture, history, and values.

15
I don't believe this is anything except a mere swing in the arms race between bots and captcha makers that has been going on since the 90s. It stands to reason that something is being developed (likely kept secret to stop AI spammers from preparing for it effectively) that we can't quite grasp the concept of currently. AI isn't magic.
Of course it isn't magic, and of course they will have solutions that work to some degree; it's just that many of these solutions are likely to involve fundamentally violating your privacy. What's wrong with simply legislating takedowns of AI-generated websites? Even IF (and I doubt that's an if) consumer-runnable AI detectors with a good success rate don't become a thing, the government would have enough resources to run them.
Because at the end of the day AI have already gotten to the point where they can fool other automated systems even if they can't fool humans, and unless you require people trying to join your forum to post an essay or whatever, that's unlikely to change. Where we differ is that I don't believe this state of affairs can last forever. Or for long.
Quote from: KittyTac
diminishing returns.
Not really?
I mean sure, if you are just increasing the size the cost to train it increases exponentially, but that isn't actually diminishing returns because it will also gain new emergent properties that the smaller versions don't have. These fundamentally new abilities mean that it isn't really diminishing returns.
It's like a WW1 biplane vs. a modern fighter jet.
The modern plane is only 10 times faster but costs 1000x more, but in return it can do a ton of stuff that even 1000 biplanes would be useless at.
It's the same for AI: sure, the 1000x-cost AI might "only" have a score of 90% instead of 50% on some test, but it can do a ton of stuff that the weaker AI would be useless at. Like what? Give some examples of what GPT-5 could POSSIBLY do that GPT-4 couldn't, besides simply knowing more uber-niche topics. What I'm getting at is that those new use cases, at least for text AI, are not something the average user needs at all.
1) By "Moore's law is dead" I meant that we are reaching a point where physics prevents the exponential rise of computing power.
Ehh, to some degree?
Sure we can't make the individual transistors much smaller, and compute growth does seem to be slowing down, but that doesn't mean it's anywhere near its peak.
Last month, DeepMind’s approach won a programming contest focused on developing smaller circuits by a significant margin—demonstrating a 27% efficiency improvement over last year’s winner, and a 30% efficiency improvement over this year’s second-place winner, said Alan Mishchenko, a researcher at the University of California, Berkeley and an organizer of the contest.
Quote
From a practical perspective, the AI’s optimisation is astonishing: production-ready chip floorplans are generated in less than six hours, compared to months of focused, expert human effort.
Stuff like AI designed chips show that there is still significant amounts of possible growth left.
Now obviously it's impossible to know how much compute growth there is left, but I'm skeptical that we are at the end of the road, especially since one of the big limits to chip design speed is the limits of the human mind. I'll believe it when I see it.
if I had an open-source GPT-4 I could run locally for free without restrictions then I'd use that over a 5% better paid solution with a filter.
I think it's likely we will soon (within a few years) see a GPT-4 equivalent that can run locally. What I disagree with is that there will only be a 5% difference between running it locally and the ~hundred(?) thousand dollars worth of graphics cards that the latest GPT model is running on.
No, the difference will be similar or even greater than what it is now; the non-local versions will simply be vastly better due to having 100x more processing power and having had training costing billions of dollars. What I'm getting at by diminishing returns is that at some point, "better" becomes nigh on imperceptible. On some automated tests it might score 30% more, sure. But at what point does the user stop noticing the difference? I don't believe that point is far away at all. The quality gap between GPT-3 and GPT-4 is technically higher than between 2 and 3 (iirc) but they feel much more similar.
2) I was talking about "good enough" being good enough for general-purpose AI. Which I think is a point that will be reached and be open-source-runnable very soon. And this is what would both allow the detection of AI text (which I believe always lacks a certain spark to it) and eat up market share for "chatbox" AI. I feel GPT-6 would be mostly for research purposes or marketed to perfectionists... if I had an open-source GPT-4 I could run locally for free without restrictions then I'd use that over a 5% better paid solution with a filter.
For the average user I agree, once you get to a certain point, (one that I think is well past GPT 4 since current GPT does indeed lack something), your average user will be content with the text generation capabilities and won't want anything more.

The issue is that AI is already far more than text; it's multimodal, including things like picture generation, math solving, ability to read pictures, to code, etc. Eventually it will include video generation, the ability to voice anyone, and even more exotic things.
Your average person might not care about all of those, but companies will very much pay tens of thousands for the best AI driven coding assistance for a single individual.
They will pay out the nose for AI to track all their employees, or to generate amazing advertising videos instead of hiring a firm, or even to simply replace a dozen people on their phone line with a vastly more knowledgeable, capable, and empathetic(-sounding) AI, or one that can solve any math problem a regular person without a degree in math can solve, etc.

Yes, eventually you will be able to run an AI locally that can do all those things, but by that point the "run on ten million dollars of hardware" AI is going to be even better and have even greater capabilities. That's not really the kind of AI I consider a real threat in the "flood the internet" sense. But yeah, fair enough. But I think it won't be one AI but more of a suite of AI tools than anything. And besides, AI image gen basically plateaued already, for the general use case.
