Bay 12 Games Forum


Author Topic: Tech News. Automation, Engineering, Environment Etc  (Read 239201 times)

Sheb

  • Bay Watcher
  • You Are An Avatar
    • View Profile
Re: Tech News. Automation, Engineering, Environment Etc
« Reply #1035 on: November 08, 2017, 02:44:36 pm »

Yeah, there's plenty of information out there, but you don't need that information to form an opinion and then defend it to the death.

Can I sig this?
Logged

Quote from: Paul-Henry Spaak
Europe consists only of small countries, some of which know it and some of which don’t yet.

Trekkin

  • Bay Watcher
    • View Profile
Re: Tech News. Automation, Engineering, Environment Etc
« Reply #1037 on: November 08, 2017, 03:19:12 pm »

Can I sig this?

Sigged!

You sigged him asking if he can sig someone else's post?
Logged

scourge728

  • Bay Watcher
    • View Profile
Re: Tech News. Automation, Engineering, Environment Etc
« Reply #1039 on: November 08, 2017, 06:50:57 pm »

No, Sheb, I refuse to be sigged yet again.  I think there's three people running around with me in their sigs, I don't think I could take the stress of a fourth.  My skin crawls just imagining it...
sigged

Jopax

  • Bay Watcher
  • Cat on a hat
    • View Profile
Re: Tech News. Automation, Engineering, Environment Etc
« Reply #1040 on: November 09, 2017, 05:30:56 pm »

AI learning cuts down render time from dozens of hours to minutes.

Doubtful if they'll ever release it for wider use, since the commercial application is pretty damn huge in terms of time saved, but it's still a damn cool thing to see happen.
Logged
"my batteries are low and it's getting dark"
AS - IG

McTraveller

  • Bay Watcher
  • This text isn't very personal.
    • View Profile
Re: Tech News. Automation, Engineering, Environment Etc
« Reply #1041 on: November 09, 2017, 07:55:33 pm »

Quote from: Jopax
AI learning cuts down render time from dozens of hours to minutes.

Doubtful if they'll ever release it for wider use, since the commercial application is pretty damn huge in terms of time saved, but it's still a damn cool thing to see happen.
Ersatz clouds!  Awesome!

It's actually... interesting. I wonder if it somehow applies to all those discussions about the computational power required to "compute" the universe. I've heard some people hand-wave it as "the universe isn't computed, it just evolves," but perhaps something like this AI stuff applies. For instance, if all you had was the output of the ANN-produced clouds, would you come up with the physics behind cloud scattering? Or does it only work the other way, where you have to already know how a cloud works to make a neural network that can make things that look like clouds, but aren't really clouds?
Logged

Trekkin

  • Bay Watcher
    • View Profile
Re: Tech News. Automation, Engineering, Environment Etc
« Reply #1042 on: November 09, 2017, 08:12:04 pm »


Quote from: McTraveller
It's actually... interesting. I wonder if it somehow applies to all those discussions about the computational power required to "compute" the universe. I've heard some people hand-wave it as "the universe isn't computed, it just evolves," but perhaps something like this AI stuff applies. For instance, if all you had was the output of the ANN-produced clouds, would you come up with the physics behind cloud scattering? Or does it only work the other way, where you have to already know how a cloud works to make a neural network that can make things that look like clouds, but aren't really clouds?

In brief (and knowing full well that I'm opening myself to being barraged by a million poorly written gee-whiz-robots articles from Wired or wherever), neural networks are excellent classifiers in part because they're efficient at pruning down huge state spaces in ways that aren't immediately apparent to human programmers -- which also means that it's not necessarily possible to learn anything transferable from the final network topology. You don't need to know how a cloud works, but you do need to know how to write code that can stochastically produce clouds.
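Roughly, the pipeline would look something like this. It's only a sketch; make_random_scene and slow_reference_render are hypothetical stand-ins for whatever parameter sampler and physically-based renderer the real system uses:

Code: [Select]
import random

def make_random_scene():
    # Hypothetical stand-in: random parameters describing a cloud to render.
    return (random.random(), random.uniform(0.0, 90.0))   # (density, sun angle)

def slow_reference_render(scene):
    # Hypothetical stand-in for the physically-based renderer that takes hours.
    density, sun_angle = scene
    return [density * sun_angle * x for x in range(8)]     # fake "image"

# Generate (scene, image) training pairs by brute force, however long it takes...
pairs = [(s, slow_reference_render(s)) for s in (make_random_scene() for _ in range(1000))]

# ...then fit a network to the scene -> image mapping and use it instead of the
# renderer: stochastic cloud-producing code in, fast ersatz clouds out.
# net.fit(pairs); net.predict(new_scene)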
Logged

Reelya

  • Bay Watcher
    • View Profile
Re: Tech News. Automation, Engineering, Environment Etc
« Reply #1043 on: November 10, 2017, 09:39:29 pm »

Well we're not actually pushing the limits with neural network topologies, are we? So you'd expect to have limited outcomes.

Pretty much everyone, even the "deep learning" people, is in fact just using one-way signal processing that feeds one layer into the next until it reaches an "output" layer. Then you have some "training" process that alters the weights, but that training process is applied externally to the network itself. We haven't in fact worked out how to create a "learning" system that is itself part of the normal operation of a neural network.

Consider that real-life neural networks don't even have "layers", and that they have loopbacks, memory and a concept of signals flowing through the network over time; plus they are truly self-teaching: there is no need for a training harness outside the network. None of those things are true of the current "deep learning" stuff.

Right now the emphasis is just on adding more layers, more processing power and more training data to get the most out of our one-way layered networks. "Deep learning" is just a buzzword that means they have more layers, and thus need more processing power. The network topology isn't actually any less simplistic; there's nothing "deep" in terms of more complex networking going on. At some point that's clearly going to hit some sort of wall where merely adding more processing power and bigger training sets isn't cost-effective compared with coming up with smarter topologies.

The types of neural networks we have now are equivalent to having a pocket calculator, not knowing how to use any buttons except "+", "-" and "=", and then blaming the calculator when it's hard to do multiplication or division. We're just not exploring much of the design space of neural networks at all.
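To be concrete about what I mean by "one-way" and "external training", here's a minimal numpy sketch (not any particular library) of the whole scheme. The forward pass is the network; the weight updates happen in an outside loop that the network itself knows nothing about:

Code: [Select]
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer, one output layer: a strictly one-way, layered network.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: signals flow input -> hidden -> output, nothing loops back.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # "Learning" happens out here: an external routine (backprop + gradient
    # descent) pokes at the weights. The network itself contains no learning.
    d_out = (out - Y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))   # should end up close to [0, 1, 1, 0]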
« Last Edit: November 10, 2017, 09:50:07 pm by Reelya »
Logged

bloop_bleep

  • Bay Watcher
    • View Profile
Re: Tech News. Automation, Engineering, Environment Etc
« Reply #1044 on: November 11, 2017, 02:11:57 am »

Quote from: Reelya
Well we're not actually pushing the limits with neural network topologies, are we? So you'd expect to have limited outcomes.

Pretty much everyone, even the "deep learning" people, is in fact just using one-way signal processing that feeds one layer into the next until it reaches an "output" layer. Then you have some "training" process that alters the weights, but that training process is applied externally to the network itself. We haven't in fact worked out how to create a "learning" system that is itself part of the normal operation of a neural network.

Consider that real-life neural networks don't even have "layers", and that they have loopbacks, memory and a concept of signals flowing through the network over time; plus they are truly self-teaching: there is no need for a training harness outside the network. None of those things are true of the current "deep learning" stuff.

Right now the emphasis is just on adding more layers, more processing power and more training data to get the most out of our one-way layered networks. "Deep learning" is just a buzzword that means they have more layers, and thus need more processing power. The network topology isn't actually any less simplistic; there's nothing "deep" in terms of more complex networking going on. At some point that's clearly going to hit some sort of wall where merely adding more processing power and bigger training sets isn't cost-effective compared with coming up with smarter topologies.

The types of neural networks we have now are equivalent to having a pocket calculator, not knowing how to use any buttons except "+", "-" and "=", and then blaming the calculator when it's hard to do multiplication or division. We're just not exploring much of the design space of neural networks at all.

It really, really isn't as simple as you say. For one thing, there are many variations of neural networks. For example, you can build a neural network that takes the previous character as input and outputs the next character. You can then feed it a bunch of text to train it; due to its design, instead of learning about the text as a whole (which might be very difficult to analyze), it'll learn which combinations of characters are common (so if it sees a 'q', it'll "know" that the next character is more likely to be a 'u' than a 'z'). This is the mechanism behind Cleverbot and other such chatbots. If Cleverbot were implemented with a neural network that takes the text as a whole instead of word by word, it would make much less sense.
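Something like this, I mean (a minimal numpy sketch of the idea, not the actual code behind any chatbot):

Code: [Select]
import numpy as np

text = "the theory that the thing there"       # toy training text
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}
V = len(chars)

# One weight matrix: one-hot previous char in, scores for the next char out.
W = np.zeros((V, V))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Train on (previous char, next char) pairs taken from the text.
for _ in range(200):
    for prev, nxt in zip(text, text[1:]):
        p = softmax(W[idx[prev]])          # predicted next-char distribution
        grad = p.copy()
        grad[idx[nxt]] -= 1.0              # cross-entropy gradient
        W[idx[prev]] -= 0.1 * grad         # nudge toward the observed next char

# After training: given 't', what does it expect next?
p = softmax(W[idx["t"]])
print({c: round(p[idx[c]], 2) for c in chars if p[idx[c]] > 0.05})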

If deep learning were as simple as linking up more and more layers as you describe, research in the area would be far less active than it is.
Logged
Quote from: KittyTac
The closest thing Bay12 has to a flamewar is an argument over philosophy that slowly transitioned to an argument about quantum mechanics.
Quote from: thefriendlyhacker
The trick is to only make predictions semi-seriously.  That way, I don't have a 98% failure rate. I have a 98% sarcasm rate.

Reelya

  • Bay Watcher
    • View Profile
Re: Tech News. Automation, Engineering, Environment Etc
« Reply #1045 on: November 11, 2017, 09:14:23 am »

I don't think you get what I'm talking about with regard to network topology, btw.

How do you think neural networks actually operate? Do you know about network topology?

"cleverbot" is not a neural network - it's just a plain old keyword search thingy that spits out pre-written replies, and the usual way you do what you're talking about with letter frequency is called a Markov Chain. Neither of those have anything to do with neural networks. So those examples are no good as examples.

The thing is exactly what I was talking about - neural networks refer to a specific technology. not cleverbot and markov chains. Those aren't even done with neural networks.

https://en.wikipedia.org/wiki/Deep_learning#Deep_neural_networks

Quote
A deep neural network (DNN) is an ANN with multiple hidden layers between the input and output layers.[10][2] Similar to shallow ANNs, DNNs can model complex non-linear relationships.

That's it; that's the entire definition of "deep learning". Traditional neural networks only had one hidden layer of "processing" nodes in them; "deep learning" is all about working out how to work with multi-layer networks. They're still entirely one-way signal-processing things, and they depend on "outside" code to do the actual learning. The "learning" part isn't actually part of the network itself, any more than it was for the one-hidden-layer networks. They don't have memory or state, nor do they have any feedback loops inside the network. Nobody knows how to design networks with those yet; e.g. nobody knows how to make a "living" network in which signals propagate around and which has memory and state.

That's why I'm saying it's become a buzzword that people treat as a magic box, when in fact the topologies aren't any more complex than before. They've just thrown more layers in, added more cells, and poured more "big data" into the same old traditional dumb network designs, purely because we have more processing power and it's cheaper to scale up a stupid network than to design a clever one.

Back to your example with letter inputs: you could use a neural network to learn a mapping from one letter to the next, but the network, even a "deep learning" one, has no state; it has no such thing as memory. So it cannot say "oh, the last letter fed to me was a Q and this one is a U" and react differently because of that. All it knows is that it was hard-wired to respond to "Q" with one output and "U" with a different output. So no, you can't in fact get a meaningful word-processing NN by feeding it a single letter at a time. All you could teach that network is that "T is followed by H", but then it will always follow T with H. The neural network doesn't have state, and it cannot remember what order it was taught the sequence in, so it has no information about that whatsoever. If you want a neural network to do something more advanced than simply spitting out "H" every time it sees "T", then you need to hand-design it to do exactly what you ask, and it will almost certainly not be able to do any more than what you explicitly design it to do. You seem to view NNs as a magic box that you feed a stream of letters and it somehow makes sense of them. It just doesn't work like that.
« Last Edit: November 11, 2017, 10:07:53 am by Reelya »
Logged

bloop_bleep

  • Bay Watcher
    • View Profile
Re: Tech News. Automation, Engineering, Environment Etc
« Reply #1046 on: November 11, 2017, 12:08:33 pm »

Oh. I could've sworn that was the algorithm behind Cleverbot...

But I do think you misunderstood me as well. I wasn't talking about something with state; I was talking about something which takes the previous character/word in the sequence as input and outputs a prediction of the next one. You would feed it some text by inputting each character of the text separately, then adjust the weights by comparing its prediction with the actual next character. So if you have the word "the", you would first input the "t", get (say) a "z", adjust the weights so it's more likely to output an "h", then feed it the "h", adjust it so it's more likely to output an "e", etc. If you want to use this neural network to generate text after it's been trained, you simply give it a first character, then repeatedly run it on the last character it output to generate a new one, until you have a string of the desired length. It's the principle behind many random text generators out there.
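The generation step on its own looks like this. predict_next stands in for the trained network; here it's just a little hard-coded probability table so the snippet runs by itself:

Code: [Select]
import random

# Stand-in for the trained network: next-char probabilities given the last char.
# (Hypothetical numbers, just to make the loop runnable.)
table = {
    "t": {"h": 0.8, "o": 0.2},
    "h": {"e": 0.9, "a": 0.1},
    "e": {" ": 0.7, "r": 0.3},
    "o": {" ": 1.0},
    "a": {"t": 1.0},
    "r": {"e": 1.0},
    " ": {"t": 1.0},
}

def predict_next(ch):
    options = table[ch]
    return random.choices(list(options), weights=list(options.values()))[0]

s = "t"
while len(s) < 40:
    s += predict_next(s[-1])   # feed the last output back in as the next input
print(s)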
Logged
Quote from: KittyTac
The closest thing Bay12 has to a flamewar is an argument over philosophy that slowly transitioned to an argument about quantum mechanics.
Quote from: thefriendlyhacker
The trick is to only make predictions semi-seriously.  That way, I don't have a 98% failure rate. I have a 98% sarcasm rate.

alway

  • Bay Watcher
  • 🏳️‍⚧️
    • View Profile
Re: Tech News. Automation, Engineering, Environment Etc
« Reply #1047 on: November 11, 2017, 03:27:03 pm »

The name for stateful, looped neural networks used for that sort of task is Recurrent Neural Networks (RNNs), which do in fact have state and temporal loops. This is pretty much the most widely known, accessible explanation of them: https://karpathy.github.io/2015/05/21/rnn-effectiveness/ "The Unreasonable Effectiveness of Recurrent Neural Networks"
It also mentions ongoing explorations of related problems towards the end, and stuff like LSTM memory models.
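For anyone wondering what the "loop" actually looks like: a vanilla RNN step is just a feed-forward layer plus one extra matrix that feeds the previous hidden state back in. A minimal numpy sketch with untrained random weights, purely to show where the state lives:

Code: [Select]
import numpy as np

rng = np.random.default_rng(0)
V, H = 26, 16                      # vocabulary size, hidden size

# Vanilla RNN cell (random, untrained weights).
Wxh = rng.normal(scale=0.1, size=(V, H))   # input  -> hidden
Whh = rng.normal(scale=0.1, size=(H, H))   # hidden -> hidden (the loop)
Why = rng.normal(scale=0.1, size=(H, V))   # hidden -> output

def step(x_onehot, h_prev):
    # The hidden state carries information forward from earlier characters.
    h = np.tanh(x_onehot @ Wxh + h_prev @ Whh)
    return h @ Why, h              # (scores for next char, new state)

h = np.zeros(H)
for ch in "bandwidt":
    x = np.zeros(V)
    x[ord(ch) - ord("a")] = 1.0
    scores, h = step(x, h)
# 'scores' now depends on the whole prefix "bandwidt", not just the last 't'.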

Likewise, much of the actual progress in deep learning has been through such network specialization. Convolutional Neural Networks have been extremely successful in image-related tasks, as the topology of the network is designed around spatial invariance combined with hierarchical composition of features.
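The "spatial invariance" bit is very literal: a convolutional layer slides one small set of shared weights over every position of the image, so a feature detector learned in one spot works everywhere. A bare-bones sketch of a single convolution, no particular framework:

Code: [Select]
import numpy as np

def conv2d(image, kernel):
    # The same small kernel is applied at every position; that weight sharing
    # is what gives the layer its translation invariance.
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((8, 8))
image[:, 4] = 1.0                           # a vertical edge
edge_detector = np.array([[-1.0, 1.0]])     # tiny 1x2 kernel
print(conv2d(image, edge_detector))         # responds wherever the edge appears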

"Deep Learning" is of course a highly effective but ultimately very nondescriptive marketing term. But the actual research into making Deep Learning effective in actual application does very much work on adapting the network topologies to the problem in question. It's precisely that sort of research which has lead to recent successes and makes the difference between 'works after a day of training' and 'it would be faster to seed a planet with amino acids and ask them to solve it after they evolve to intelligence.'
Logged

Reelya

  • Bay Watcher
    • View Profile
Re: Tech News. Automation, Engineering, Environment Etc
« Reply #1048 on: November 11, 2017, 08:06:37 pm »

Quote from: bloop_bleep
Oh. I could've sworn that was the algorithm behind Cleverbot...

But I do think you misunderstood me as well. I wasn't talking about something with state; I was talking about something which takes the previous character/word in the sequence as input and outputs a prediction of the next one. You would feed it some text by inputting each character of the text separately, then adjust the weights by comparing its prediction with the actual next character. So if you have the word "the", you would first input the "t", get (say) a "z", adjust the weights so it's more likely to output an "h", then feed it the "h", adjust it so it's more likely to output an "e", etc. If you want to use this neural network to generate text after it's been trained, you simply give it a first character, then repeatedly run it on the last character it output to generate a new one, until you have a string of the desired length. It's the principle behind many random text generators out there.

The problem is that your example is fundamentally broken: you're not taking into account that the learning algorithm for a stateless NN does not know the order in which it was trained on the inputs; that is information that you have but the NN does not. You're just assuming that if you put the inputs in in the "right" order, then the NN must learn something about that order. It doesn't work like that. Training data comes in input/output pairs, and the machine learns only a mapping from one set to the other. You can randomize the order of the inputs and get identical results, which proves that the "order" the training data occurred in wasn't actually represented in your schema.

E.g. if you teach it the word "THE", and teach it that "T" is followed by "H", that's all well and good, but what happens when "T" occurs in different words? You've hard-wired the thing to write "THE", but if you teach it "TAKE" and "TO" then that overwrites the original training for the letter "T". You might say "but those T's were in a different context...", however that's exactly the problem: you're only training it on pairs such as "T is followed by H". It doesn't have any context other than what you build into the training pairs.
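To make that concrete, here is literally everything the "training set" contains in that scheme (a trivial sketch):

Code: [Select]
words = ["the", "take", "to"]
pairs = [(prev, nxt) for w in words for prev, nxt in zip(w, w[1:])]
print(pairs)         # [('t','h'), ('h','e'), ('t','a'), ('a','k'), ('k','e'), ('t','o')]

targets = {}
for prev, nxt in pairs:
    targets.setdefault(prev, set()).add(nxt)
print(targets["t"])  # {'h', 'a', 'o'} in some order: three conflicting targets for
                     # the same input, with nothing recording which word each came from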

~~~

And you'll find that almost all random text generators actually use Markov chains and not neural networks, because Markov chains preserve all the pattern information rather than overwriting it the way a neural network does. Basically... I'm assuming here that you're just confused about the terminology and have actually read about how Markov chains work. They have nothing to do with neural networks, and they are in fact the go-to method for random text generation.

Markov chains can work per letter, but you get good results when you give one three letters and ask it for the next letter based on probability. Probability is the reason people use Markov chains here and not neural networks: NNs learn "input A always gives output B", whereas Markov chains work on "30% of the time T leads to H, 55% of the time T leads to O..." and so on, which is information that your one-letter-at-a-time neural network isn't guaranteed to preserve.

However, even the three-letters-ahead Markov chains just produce gibberish. The real benefits come when the "units" are whole words: then you basically make a statistical table of word predictions looking three words ahead, and you start to get believable text. A neural network being fed one letter at a time is not capable of producing anything like that, and neural networks have continuous, "analogue"-style inputs and outputs.
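This is basically the whole of a Markov chain text generator, by the way (a minimal word-level sketch, with two words of context to keep it short):

Code: [Select]
import random
from collections import defaultdict

text = ("the cat sat on the mat and the cat saw the dog "
        "and the dog sat on the cat").split()

order = 2                                  # how many words of context to keep
chain = defaultdict(list)
for i in range(len(text) - order):
    key = tuple(text[i:i + order])
    chain[key].append(text[i + order])     # duplicates preserve the probabilities

state = ("the", "cat")                     # seed words
out = list(state)
for _ in range(12):
    nxt = random.choice(chain.get(state, text))   # fall back to any word on an unseen state
    out.append(nxt)
    state = tuple(out[-order:])
print(" ".join(out))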

e.g. for the "letters" thing the best method for an Neural Network would be to have 26 separate inputs - one for each letter of the alphabet and 26 separate outputs - also one per letter. You'd then teach to output a value of 1.00 on the "H" output whenever the "T" input has a value of 1.0. However, for this method teaching it even "TH is followed by E" would need 676 inputs (26*26) ... which is already getting intractable for a problem that's trivially solvable with a Markov Chain.
« Last Edit: November 11, 2017, 08:46:40 pm by Reelya »
Logged

USEC_OFFICER

  • Bay Watcher
  • Pulls the strings and makes them ring.
    • View Profile
Re: Tech News. Automation, Engineering, Environment Etc
« Reply #1049 on: November 11, 2017, 09:33:16 pm »

Except you can use neural networks to generate text where context plays an important role. And I'd like to mention that this one was trained one word at a time, using cards translated into text files. Before anyone mentions anything: the twitter account has cards from a good RNN, an okay RNN and a bad RNN, which is why some are more gibberish than others. Like alway pointed out right above your post, Recurrent Neural Networks exist and solve many of the problems you're talking about. Unlike feed-forward neural networks, their topology forms loops, which means the additional feedback allows them to store information about the current state, and so on and so forth.

Also, I'm pretty sure that most text generators use Markov chains because they're easier to implement and test. The problem with them is that they're inherently stateless by definition, which is to say that they don't retain context. That's why they generate gibberish with the three-letters-ahead chain and ramble with the three-words-ahead chain. The only context they have comes from the current state they're working with. Three words contain more context than three letters do, but even that isn't good enough to generate long stretches of believable text or something with large amounts of context like an M:tG card. T might have a 30% chance of being followed by H across all cases of T, but if the T comes immediately after 'bandwid' then the chance of it being followed by H shoots up considerably. And that's information a Markov chain can't consider without greatly increasing the size of the current state, and thus the memory and calculation needed to build the chain.
Logged