someone's Journal

Below are the 20 most recent journal entries recorded in someone's LiveJournal:

Tuesday, February 6th, 2024
10:52 pm
First Mover Advantage

In my last post I wrote that it does not matter who develops AGI first. This is incorrect. OpenAI is on the verge of developing a model (GPT-5) smart enough to accelerate the development of its successor. At that moment, OpenAI will rapidly start accumulating unlimited power. I wonder whether some external force (mainly the US government) will be able to intervene in time to take control of the future "supermodel". Looking at the latest survey on AGI development timelines, it's clear the government does not expect progress to come so soon, and will miss its chance. There might be smart and powerful people who are aware of what is happening and ready to take action, but after discussing these ideas with several smart people I know, I doubt it. As a result, power will be concentrated in the hands of a small group of people at OpenAI (and possibly at Microsoft). How will they use this power? Will they be able to control it? Will they allow competition? It's disturbing to think that the fate of mankind might lie in the hands of Sam Altman.


It's funny how people care about who will become the next US president, completely oblivious to the civilization-redefining events taking place right in front of their eyes.




Saturday, December 23rd, 2023
1:14 am
Jobs

An AGI-level model will appear within the next 3 years. Most likely it will be GPT-5 or GPT-6. It's possible that Google or Anthropic (or even one of the smaller players) will catch up with OpenAI by that time. It does not really matter who will develop it first.


I've been thinking about what happens next. I define AGI-level intelligence as the capability to perform any task that a human can, with a possible exception for tasks requiring genius-level intellect. As soon as such a model becomes available, there will be widespread and rapid automation of most jobs. It is painfully obvious to me that no one (workers, employers, or governments) will be ready for this imminent and massive social disruption.


Monday, November 27th, 2023
1:18 am
GPT-4+

There have been significant technical developments at OpenAI recently: a voice interface, a visual interface, and internet search have been added to GPT-4 in the last 30 days. In addition, the maximum text input length has been expanded to the equivalent of hundreds of pages.


I can now talk to the model in real time; it understands me and my thick accent without any difficulty, and it responds in a natural human voice. I asked it to tell a story to my kids at bedtime, and it did an excellent job, much better than I ever could.


I can now show it an image, ask complex questions about what is pictured, and ask it to generate images by describing them in plain English. I showed it a screenshot of a page of dense mathematical equations describing an idea for an algorithm, and it correctly parsed everything, explained the algorithm, and translated it into working code, following my instructions precisely. I also tried generating an image from a very specific textual description, and it did a decent job: not perfect, and not on the first try, but impressive nevertheless.


Just 3 weeks ago I could not paste a long piece of code into the ChatGPT window; it could only accept a couple of thousand words, and I had to spend 20 minutes simplifying the code I wanted it to analyze. Last week I pasted 10x as much code without any issues.


Tuesday, April 4th, 2023
8:12 pm
GPT-4

I've been talking to a computer for the last two hours, asking it complicated questions and learning new things, even in my area of expertise (training neural networks). The improvement over GPT-3.5 is significant. I've seen no factual errors, logical inconsistencies, or other indications that I'm not talking to a smart and knowledgeable human being. My conversation with it was like meeting with a friendly PhD adviser to discuss an idea for our next paper.


I believe we have reached escape velocity with these models. If the current rate of progress continues (and nothing so far indicates it will not; listen to Ilya Sutskever), we will have GPT-5 in 1-2 years, and there's a high chance it will be smart enough to take over many jobs. New jobs will appear, mainly managing such capable models, but things will be happening fast, so I'm not sure how society will react. GPT-5 will most likely be smart enough to suggest good ideas on how to improve itself, leading to the creation of GPT-6, and at that point (2-4 years from now) I honestly don't know what to expect.


The current situation reminds me of the first nuclear bomb test, where some physicists feared it would set the atmosphere on fire, destroying the whole planet. 

Friday, July 17th, 2020
9:12 pm
Success

“The secret of success in life is for a man to be ready for his opportunity when it comes.”


- Benjamin Disraeli

Thursday, February 8th, 2018
9:00 am
Life Hacks

I made a few lifestyle adjustments recently:

1. I installed a website blocker on my work computer. I used to spend anywhere from one to several hours a day browsing random stuff, and communicating with strangers. Now I do that only briefly in the evening when I get home.

2. I bought an Apple Watch with cellular. I used to carry my iPhone with me everywhere and check it constantly. No more. Now it stays at home, and I barely touch it. Other than phone calls and texts, the only feature I use on the watch is the exercise tracker. It makes me get up and move every hour when I'm sitting still, and it provides extra motivation to exercise by showing steps taken, flights of stairs climbed, calories burned, and total time moved per day. Overall, money well spent.

3. I'm on Lebedev's diet. My weight had been going up recently, mostly because of overeating. Now I try to limit not only what I eat, but also how much I eat.

4. I replaced biking with walking/running. It takes more time and effort, but it's worth it.

5. I incorporated high-intensity exercise into my workouts. Now I try to run every day, usually 2 miles as fast as I can. I also started using a rowing machine, doing at least 5 minutes per session; my best result so far is 150 calories burned in 12 minutes. And I started doing "burpees" (pushups followed by high jumps).

Tuesday, November 14th, 2017
12:24 pm
WOTD

shrinking violet

Monday, May 15th, 2017
10:29 am
Internship
I'm going to Luxembourg for the summer! I will be working for a startup that creates music using machine learning. Here's an example.

Choosing an internship was a hard decision: I got two offers, and the local one (Malibu) offered a lot of money. I chose Aiva for several reasons: I got a better feeling about the team members during the interview, I was more excited about what they do, and I've always wanted to experience living in Europe.

I will bring my family (Katya and Yana) with me for 6 weeks, then fly back with them for a week and return to Luxembourg for 5 more weeks. I'm very excited!
Sunday, September 18th, 2016
5:41 pm
Consciousness
Consciousness is how the process of backpropagation feels to a sufficiently complex neural network.
5:05 pm
Existence of God
Basilides, one of the most intriguing figures of early Gnosticism, believed that the highest attribute of divinity is its inexistence. By his own account, Basilides was a theologian of the “nonexistent God”; he referred to God as “he who is not,” as opposed to the maker of the world, trapped in existence and time.

Source: http://nytimes.com/2016/09/18/opinion/why-do-anything.html
Thursday, August 25th, 2016
4:06 pm
Wireless AI
Note to AI experimenters:

A sufficiently intelligent AI program will be able to generate and receive wireless (e.g. WiFi) signals even without any dedicated antenna. Moreover, putting it in a Faraday cage might not help (to see why, put your cellphone in a microwave, close the door, then call it).
Wednesday, August 10th, 2016
8:02 pm
ESL
"Клин клином вышибают" - "Fight fire with fire"
Saturday, May 28th, 2016
4:26 pm
VR sensing, UBI, and Ethereum
For a while now, I've been wanting to write about several interesting topics, but I'm not sure when I will get a chance. So I'm just going to list them here as placeholders:

1. When creating virtual worlds, how would you implement sensory perception for artificial beings? In the real world, for example, our eyes capture photons bounced off surrounding objects, our ears react to air pressure fluctuations, and our noses catch tiny particles of stuff floating everywhere. If you want to populate your own personal artificial world with virtual humans, which senses should they employ to perceive each other? Emulating individual photons seems awfully computationally costly. What about ray tracing? Instead of passive capture, actively scan your surroundings with a laser-like ray, which would get modulated by the texture of any encountered object, providing information about the nature of that object (see the first sketch after this list). Regarding hearing: I'm not sure we would simulate air in the virtual world, so when someone wants to "make noise", implement it as a radio-wave broadcast. Every inhabitant would have a built-in "antenna" to perceive this info.

2. Universal Basic Income (UBI). A couple of cities in the (first) world are currently trying it out, and a few more are planning to. UBI is welfare 2.0: a more direct, and hopefully more effective, form of wealth sharing. The idea is simple: everyone is entitled to a moderate monthly payment, "want based" rather than "need based". This means that even if you're perfectly capable of earning money, you no longer have to do so in order to survive; you will receive enough money from the government to have your basic needs met. Where would the money come from? Probably taxes. Some questions arise, however: How much money is enough to meet one's basic needs? Would it depend on the geographic area (obviously one has to spend more on room and board in Manhattan than in rural Oklahoma)? Should everyone get it regardless of income, or should we set an eligibility threshold? What will be the societal impact of this program? How many people will take advantage of it? How many will still choose to work, or stay productive otherwise? As various weak AIs replace more and more humans doing menial tasks, UBI no longer looks like some form of utopia, but rather like a way to avoid civil unrest.

3. Ethereum. Using a decentralized, secure database of accounts and transactions (a blockchain), we can create a virtual currency, and if everyone assigns the same value to it, it becomes a real currency. That's the idea behind Bitcoin. If we allow more complicated transactions (e.g., ones defined by computer programs with conditionals), we can potentially conduct sophisticated operations in a completely decentralized manner. That's the idea behind "smart contracts" (see the second sketch after this list). At least that's how I understood it after a few minutes of googling. It's still not clear to me how serious this can get, but, just like the UBI trials, it might be a sign of something big emerging...
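
For item 1, a minimal sketch of the active-scanning idea: a virtual being casts laser-like rays and reads back a value "modulated" by whatever object each ray hits. The world model (circles carrying a texture label) and all names are made up for illustration.

    import math

    def cast_ray(origin, angle, objects, max_dist=100.0, step=0.5):
        """March a ray forward; return (distance, texture) of the first hit, or None."""
        x, y = origin
        dx, dy = math.cos(angle), math.sin(angle)
        d = 0.0
        while d < max_dist:
            x, y, d = x + dx * step, y + dy * step, d + step
            for (cx, cy, r, texture) in objects:
                if (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2:
                    return d, texture   # the ray comes back "modulated" by the object
        return None

    objects = [(10.0, 0.0, 2.0, "wood"), (0.0, 15.0, 3.0, "stone")]
    # Sweep a full-circle "laser" scan from the origin with 64 rays.
    scan = [cast_ray((0.0, 0.0), 2 * math.pi * i / 64, objects) for i in range(64)]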
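
For item 3, a toy sketch of a "smart contract" in the loosest sense: a transfer gated by a programmable condition that every participant could evaluate identically against the shared ledger. This only illustrates the idea of conditional transactions; real Ethereum contracts are programs executed by the EVM, not Python objects.

    from dataclasses import dataclass, field

    @dataclass
    class Ledger:
        balances: dict[str, int] = field(default_factory=dict)

        def transfer(self, src: str, dst: str, amount: int, condition) -> bool:
            # The condition travels with the transaction and is evaluated
            # against the ledger state before the transfer is applied.
            if condition(self) and self.balances.get(src, 0) >= amount:
                self.balances[src] -= amount
                self.balances[dst] = self.balances.get(dst, 0) + amount
                return True
            return False

    ledger = Ledger({"alice": 100, "bob": 20})
    # Pay bob only if he already holds at least 10 coins (an arbitrary condition).
    ok = ledger.transfer("alice", "bob", 50, condition=lambda l: l.balances["bob"] >= 10)
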
Wednesday, March 9th, 2016
2:00 am
Brain Simulation
Neuronal activity consists largely of neurons firing, spikes propagating, and synapses forming and changing. These things can happen either as a result of external (sensory) input coming into the brain, or because of feedback loops in the brain itself.

We can capture the state of the brain at any particular moment by recording all relevant parameter values. These parameters can be plugged into a functional model of the brain, together with any input signals. The model will allow us to predict (calculate) how the system is going to change when started from those initial parameters. (The real brain changes according to the laws of physics: for example, if the electrical potential in some neuron is large enough, that neuron is likely to fire; the brain also changes as the input signals change.) The system uses analog signals and is not governed by a global clock, so the change will be analog (gradual). There is no "next state" to speak of; the state is continuously changing. We can make "snapshots" of a real, living brain at different times, or we can calculate the state of the brain at those times. If the results are identical, we have a good model.
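
As a toy illustration of such a functional model, here is a sketch that stores the "parameters" (synaptic weights) and a state "snapshot" (membrane potentials) as numbers and steps them forward in small time increments under sensory input. It uses leaky integrate-and-fire neurons; every constant is illustrative, not biologically calibrated.

    import numpy as np

    N, DT, STEPS = 100, 1e-3, 1000          # neurons, time step (s), number of steps
    TAU, V_TH, V_RESET = 20e-3, 1.0, 0.0    # membrane time constant, threshold, reset

    rng = np.random.default_rng(0)
    W = rng.normal(0, 0.1, (N, N))          # synaptic weights (the "parameters")
    v = rng.uniform(0, V_TH, N)             # initial membrane potentials (the "snapshot")

    for _ in range(STEPS):
        sensory = rng.normal(0, 0.5, N)             # external (sensory) input
        spikes = v >= V_TH                          # neurons whose potential is large enough fire
        v[spikes] = V_RESET
        recurrent = W @ spikes                      # feedback loops within the network
        v += DT / TAU * (-v + recurrent + sensory)  # continuous change, approximated in small steps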

Calculating the state of the brain at successive points in time, given initial parameters, sensory input, and a functional model, can be considered an active, ongoing brain simulation. Calculating those states frequently enough lets us reconstruct the pattern of neuronal activity, which we can then decode into specific thoughts, feelings, and motor commands intended to generate actions. We could have a robot perform those actions, and this robot would appear alive and even "conscious". However, there is no living "being" controlling this robot. The brain-state calculations could, in principle, be done on paper, because it's all just number crunching*. The calculated numbers could tell us what the person would feel, if this were a real person. But it's not. It's a description of a person: a mathematical model with a bunch of parameters.

Such a robot would already be pretty impressive, but how do we create a "living being"? For that, we need to switch from performing calculations to running physical processes. We need to build a system where processes are happening "on their own". Instead of calculating the "next state", we need to let the system run so that any "next state" would develop naturally. Instead of calculating a snapshot at a particular time, we should have a system that has a continuous physical state at all times.

It's not clear how accurately we need to imitate the relevant physical processes in hardware, or if it's possible to use some software abstractions. For example, can we represent synapses as numbers stored in memory, or must they be actual physical devices, such as memristors? Do we need to generate analog voltage spikes on dedicated wires, or can we use digital data packets on a switched network between neurons?

I tend to think that as long as we recreate the movement, transformation, and storage of important information throughout the entire system, we have a living being.


*Compare with Searle's Chinese Room thought experiment.
Tuesday, November 24th, 2015
9:12 am
supercomputer
When Seymour Cray heard that Steve Jobs bought a Cray supercomputer for $14M to help design the next Mac, he said: "It's funny, I just bought a Mac to help me design the next Cray".
Saturday, October 3rd, 2015
7:58 pm
Monty Hall Problem
Game rules: you're standing in front of three closed doors. Behind one of them is a prize. You tell the host which door you think is the prize door (a random pick). The host knows where the prize is, and he opens one of the other two doors, one that does not hide the prize. After that, two doors remain, and one of them is the prize door. You can choose again: stick with your original choice, or switch to the door the host did not open. What should you do?

Solution:
Your initial choice is random, so the probability that you picked the right door is 1/3. The probability that the prize is behind one of the other two doors is 2/3. By opening one of those two doors, the host shifts all of that probability onto the one remaining door. So the door the host did not open now has a 2/3 probability of hiding the prize, while your original door still has only a 1/3 probability. You should switch.
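
A quick Monte Carlo simulation backs this up; a minimal sketch (door indices and trial count are arbitrary):

    import random

    def play(switch: bool, trials: int = 100_000) -> float:
        """Estimate the win rate of the stick/switch strategy by simulation."""
        wins = 0
        for _ in range(trials):
            prize = random.randrange(3)    # door hiding the prize
            pick = random.randrange(3)     # contestant's initial random pick
            # Host opens a door that is neither the pick nor the prize door.
            host = next(d for d in range(3) if d != pick and d != prize)
            if switch:
                pick = next(d for d in range(3) if d != pick and d != host)
            wins += pick == prize
        return wins / trials

    print(f"stick:  {play(switch=False):.3f}")   # ~0.333
    print(f"switch: {play(switch=True):.3f}")    # ~0.667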

Note: I didn't get this at first. I thought that when the host opens his door, we are left with two equally probable doors, and switching would not increase our chances.
Monday, July 6th, 2015
12:25 pm
real or not?
Let's say we figured out how to simulate a human brain on a computer. We have built a highly realistic virtual world where simulated people live.

Let's compare two situations:
1. There are real people on the other side of the world, who experience a real rain (but we, on this side of the world, can't experience that rain directly).
2. There are simulated people who experience a simulated rain, which is real to them.

The simulated people have real consciousness, and are indistinguishable from real people when viewed within their virtual world. They can even control robotic, human-like bodies in the real world. Can we say the simulated people are real?
What if we regard the virtual world as an extension of the real world, just as the "New World" (the Americas) was considered an extension of the Old World? In that case, the virtual rain experienced by virtual people is just as real as the rain experienced by the real people on the other side of the real world.

This reminds me: "If a tree falls down in a forest, and no one is around to hear it, does it make a sound?"
Saturday, May 2nd, 2015
9:23 pm
Intelligent Design
Some schools in the US teach the Intelligent Design concept instead of the theory of evolution.

Let's think about it. Say there are 10^14 habitable planets in the observable Universe. How likely is it that by chance, on some of these planets, over a 10^10-year time span, a particular combination of molecules formed and started off the development of life?
Well, I have no clue what the probability is, but it seems plausible.

But how plausible is evolution itself? How likely is it that once the process has started, it will continue long enough to produce something with the complexity of a neocortex, given the law of thermodynamics (the amount of disorder naturally tends to increase with time) and natural disasters (meteorite impacts, ice ages, etc.)? The likelihood of evolution continuing should decrease with time, making the neocortex a very unlikely outcome (it needs a billion years of continuous evolution!).

So, some of the planets form the life seed. Say one in a billion does: 10^14 x 10^-9 = 10^5, which leaves fewer than a million planets where the evolution process would start. How many would sustain evolution long enough to develop complex life? Not many.

On the other hand, only one successful evolution is enough to trigger a chain of "Intelligent Designs". Say the first (or even the only) successful evolution in the Universe happened 10 billion years after the Big Bang. That means that 4 billion years ago, they could have been advanced enough to start planting life seeds all over, and those life seeds would be specially engineered to survive and to evolve all the way to a neocortex.
Looking at evolution, sometimes I feel like it's working too well, as if someone were pushing it forward, constantly making the right choices, constantly innovating.

Anyway, it's far more likely that we are the planted life rather than the original one.
7:31 pm
Text (mis)understanding
I just did a Google search: "how many stars are there", and Google responded with:

"""100 billions

To answer “how many stars are there,” we must limit the discussion to what we can observe. Astronomers estimate that the observable universe has more than 100 billion galaxies. Our own Milky Way is home to around 300 billion stars, but it's not representative of galaxies in general."""

How can we make a system that would answer this correctly? Perhaps combine IBM Watson, which would know that the Milky Way is a galaxy, with Memory Networks QA (demo).
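
For reference, the answer a correct system should have assembled from that very snippet is the product of the two quoted figures. A back-of-the-envelope check (both numbers are rough, order-of-magnitude estimates):

    galaxies = 100e9            # observable universe, per the quoted snippet
    stars_per_galaxy = 100e9    # order-of-magnitude average; the Milky Way is above average
    print(f"~{galaxies * stars_per_galaxy:.0e} stars")   # ~1e+22
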
Tuesday, April 21st, 2015
12:36 pm
NLP update
I made some progress understanding how to do feature extraction from text:

Take a large text corpus, such as Wikipedia, or all 19th century novels.

Run a clustering algorithm, such as LDA or another topic modeling method, to identify distinct topics in that corpus. For example: music, math, history, food, furniture, emotions, etc. Pick some number of topics (100 to 100,000, depending on the dataset, the desired abstraction level of the topics, and the processing power available).
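
A minimal sketch of this step with scikit-learn, using a small stand-in corpus (20 newsgroups rather than Wikipedia or novels); the vocabulary cap and topic count are arbitrary choices:

    from sklearn.datasets import fetch_20newsgroups
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    # Fit LDA on the stand-in corpus to discover n_topics distinct topics.
    docs = fetch_20newsgroups(remove=("headers", "footers", "quotes")).data
    vectorizer = CountVectorizer(max_features=20_000, stop_words="english")
    counts = vectorizer.fit_transform(docs)

    n_topics = 100
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(counts)   # lda.components_ : (n_topics, n_words) topic-word weights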

Take each word in the corpus and construct a sparse vector, where each position contains the probability of this word appearing under the corresponding topic. For example, the word "organ" can be found in a fraction of the documents about music. Or, instead of probabilities, just use 0 or 1 to simplify things.

Account for context differences by creating multiple vectors per word: "organ" might have one vector representing the musical instrument, another for the body part, etc. When looking at a specific chunk of text, we would dynamically assign the proper vector to each word, based on the presence of other words indicating the context. Again, we could use only one vector per word at first.

Construct a text image by arranging the word vectors side by side. So if there are 1,000 topics and we are dealing with one-page-long texts (on the order of 1,000 words), we would generate a 1,000x1,000-pixel black-and-white image. Perhaps we could use RGB colors to represent three different contexts?
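
Continuing the sketch above (one vector per word, no context disambiguation yet), the word vectors can be read off the fitted LDA model and stacked side by side; text_image is a made-up helper name:

    import numpy as np

    # Normalize each vocabulary column of the topic-word matrix so that every
    # word gets a rough distribution over the n_topics topics.
    word_topic = lda.components_ / lda.components_.sum(axis=0, keepdims=True)
    vocab = vectorizer.vocabulary_                 # word -> column index

    def text_image(text: str) -> np.ndarray:
        """Arrange word vectors side by side: shape (n_topics, n_words_in_text)."""
        cols = [word_topic[:, vocab[w]] for w in text.lower().split() if w in vocab]
        return np.stack(cols, axis=1) if cols else np.zeros((word_topic.shape[0], 0))

    img = text_image("the organ music filled the cathedral")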

I see two options for extracting features from these images: unsupervised learning and classification. Both methods have been shown to produce high-level features. Classification could be easier to do at first: separate all documents (text chunks) into categories; for example, for 19th-century novels: "love scenes", "fights", "philosophizing", "betrayal", "travel", "descriptions of poverty", "descriptions of luxury", etc.
Or use unsupervised learning: let the model identify similar semantic patterns on its own (via some form of deep clustering), like the cat faces found in YouTube videos. (A minimal sketch of the classification route follows.)
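
As a sketch of the classification route, a tiny PyTorch CNN over one-channel text images (n_topics rows by n_words columns); the layer sizes and the seven classes are arbitrary, and the input here is random noise standing in for real text images:

    import torch
    from torch import nn

    # A small CNN that maps a (1, n_topics, n_words) "text image" to class logits.
    class TextImageNet(nn.Module):
        def __init__(self, n_classes: int = 7):   # e.g., the seven novel categories above
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(4),
                nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),           # tolerates variable text lengths
            )
            self.classifier = nn.Linear(32, n_classes)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    model = TextImageNet()
    logits = model(torch.randn(8, 1, 1000, 1000))  # a batch of 8 one-page images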