Computer technology is becoming more and more common.




A start-up called MetaMind has developed a new machine-learning algorithm that improves language processing.

Talking to a machine by telephone or chat can be exasperating. However, several research groups, including some at large technology companies like Facebook and Google, are making steady progress toward improving the language skills of computers through developments in machine learning (see "Google captions described with almost human precision"). This is the new era of computer technology.

The latest development in this computer technology comes from a start-up called MetaMind, which has published the details of a system that is more accurate than other techniques at answering questions about multiple lines of text that tell a story. MetaMind is developing technology designed to perform a variety of machine-learning tasks, and it hopes to sell that technology to other companies. The start-up was founded by Richard Socher, a leading expert in the field who earned his doctorate at Stanford University (USA). He believes this type of technology could change computing.

MetaMind's approach combines two forms of memory with an advanced neural network fed large amounts of annotated text. The first is a kind of database of concepts and established facts; the second is short-term, or "episodic". When asked a question, the system, which the company calls a dynamic memory network, looks for relevant patterns within the text it has learned from; after finding associations, it uses its episodic memory to return to the question and look for more abstract patterns. This process allows it to answer questions that require connecting several pieces of information.

A paper related to this work, published online last week, gives the following example:

Statements provided to the system:

Jane went into the hall.

Mary went to the bathroom.

Sandra went into the garden.

Daniel returned to the garden.

Sandra took the milk there.

Question: Where is the milk?

Answer: Garden.
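The two-pass idea behind this example can be illustrated with a hand-written toy. This sketch is not MetaMind's neural model; it is a hypothetical keyword-based illustration of the multi-hop reasoning the article describes, assuming the garbled statement above means Sandra took the milk with her:

```python
# A toy illustration of two-pass ("episodic") reasoning over the story.
# Pass 1 finds the fact mentioning the queried object; pass 2 revisits
# the story to find where the person holding it last went.

story = [
    "Jane went into the hall",
    "Mary went to the bathroom",
    "Sandra went into the garden",
    "Daniel returned to the garden",
    "Sandra took the milk there",
]

def where_is(obj, story):
    # Pass 1: who last interacted with the object?
    holder = None
    for fact in story:
        words = fact.split()
        if obj in words:
            holder = words[0]
    if holder is None:
        return None
    # Pass 2: where did that person last go?
    location = None
    for fact in story:
        words = fact.split()
        if words[0] == holder and ("went" in words or "returned" in words):
            location = words[-1]
    return location

print(where_is("milk", story))  # -> garden
```

A neural dynamic memory network learns these associations from examples rather than relying on hand-written rules, but the chaining of facts across multiple reads is the same basic mechanism.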

Having been trained on data sets covering sentiment and structure, the system can answer questions about the sentiment or emotional tone of a text, in addition to basic questions about its structure. "The cool thing is that it learns from examples," says Socher. "The system programs its episodic memory itself."

MetaMind tested its system on a data set released by Facebook for measuring machine performance on question-and-answer tasks. The start-up's software outperformed Facebook's own algorithms by a small margin.

Giving computers a greater understanding of everyday language could have important implications for companies like Facebook. It could offer users a better way to search or filter information, allowing queries to be entered as full sentences. It could also allow Facebook to grasp the meaning of the information users post on their profiles and their friends'. This could be a powerful way to recommend information, or to place ads next to content in a more thoughtful way.

The work is a sign of the progress under way toward improving the language skills of machines. Much of this work now revolves around an approach called deep learning (see "Deep learning wants to revolutionize all industries"), which involves feeding large amounts of data into a system that performs a series of calculations to identify abstract features within, for example, an image or an audio file.

"The promise is that the architecture separates the 'episodic' and modules 'semantic' memory," said Noah Smith, an assistant professor at Carnegie Mellon University who studies natural language processing. "This has been one of the shortcomings of many architectures based on artificial neural networks, and from my point of view it is great to see a step toward models that can be inspected to allow engineering enhancements."

Yoshua Bengio, a professor at the University of Montreal (Canada) and a leading figure in the field of deep learning, describes Socher's system as "a novel variant" of the methods advocated by Facebook and Google. Bengio adds that the work is one of many recent advances that are experimental but promising. "The potential of this type of research is very important for speech understanding and natural language interfaces, which are crucial and extremely valuable for many companies," he says.

Others are less impressed. Robert Berwick, professor of computational linguistics and computer science at MIT, says Socher's method uses established techniques and offers only incremental progress. He adds that it bears little resemblance to the workings of episodic memory in the human brain and ignores significant recent progress in linguistics.

Socher believes that his most important contribution is progress toward a more generalizable artificial intelligence. "This idea of adding memory components is something floating in the air right now," he says. "Many people are building different types of models, but our goal is to try to find one model able to perform many different tasks."

We hope you enjoyed this article about computer technology.
