Neural Networks for Large Scale Machine Learning

Asim Roy (Inventor)

Research output: Patent

Abstract

The Internet of Things (IoT) is expected to contain over 26 billion devices (excluding PCs, tablets, and smartphones) by 2020 and to reach a market size in excess of $14 trillion by 2025. These devices include sensor-based medical devices, automobiles, manufacturing plants, power systems, and smart homes. In many IoT applications, a system must be in place to analyze patterns in streaming data, detect certain types of events (e.g., impending failure or deteriorating performance), and take appropriate action. Machine learning systems would perform these tasks and thus become a critical element of a wide range of IoT applications. Neural networks are well positioned to address these challenges of large-scale machine learning; unfortunately, high-dimensional data is a problem for most machine learning methods.

Researchers at Arizona State University have invented a new neural network method that can be parallelized at different levels of granularity to ensure speed. The technology addresses high-dimensional data through class-based feature selection, which allows the method to automatically perform dimension reduction: it determines the important variables of a high-dimensional pattern classification problem and creates pattern classifiers from a small set of those important variables. The method can learn from both streaming and stored data. Hardware implementation offers advantages over current technologies, such as localized learning and distributed decision-making.

Potential Applications
- Internet of Things
- Parallel computations
- Data storage/mining
- Machine learning
- Neural networking
- Robotics

Benefits and Advantages
- Adaptable: learns from both stored and streaming data; able to exploit massively parallel computing hardware and to handle high-dimensional data.
- Highly scalable: can deal with terabytes of data without resorting to sampling techniques.
- Speed: can be parallelized on cluster-computing platforms.
- Low cost: reduces the volume of network traffic, lowering costs.
- Hardware implementation: localized learning and response reduces signal transmission through expensive networks; reduces reliance on a single control center for decision-making, allowing distributed control of machinery and equipment; makes learning machines deployable on an anytime, anywhere basis even without access to a network or cloud facility, making machine learning ubiquitous.
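To make the idea of class-based feature selection concrete, the sketch below scores every feature separately for each class (using a simple Fisher-style separation ratio) and keeps only the top-scoring features for that class. The patent abstract does not disclose the actual scoring rule or selection procedure, so the `fisher_score` function, the one-vs-rest scoring, and the toy data here are all illustrative assumptions, not the patented method.

```python
def fisher_score(values_in, values_out):
    """How well one feature separates a class from the rest:
    squared mean gap divided by the summed within-group variances."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs, m):
        return sum((x - m) ** 2 for x in xs) / len(xs)
    m_in, m_out = mean(values_in), mean(values_out)
    spread = var(values_in, m_in) + var(values_out, m_out)
    return (m_in - m_out) ** 2 / (spread + 1e-12)

def select_features_per_class(X, y, k):
    """Return {class: [feature indices]} keeping the k best features
    for each class, scored one-vs-rest. This per-class reduction is
    what lets each class's classifier work in a much smaller space."""
    n_features = len(X[0])
    selected = {}
    for c in sorted(set(y)):
        scores = []
        for j in range(n_features):
            inside = [row[j] for row, label in zip(X, y) if label == c]
            outside = [row[j] for row, label in zip(X, y) if label != c]
            scores.append((fisher_score(inside, outside), j))
        selected[c] = [j for _, j in sorted(scores, reverse=True)[:k]]
    return selected

# Toy data: each class is distinguished by exactly one feature
# (class "a" by feature 0, "b" by feature 1, "c" by feature 2).
X = [[5.0, 0.1, 0.1], [5.2, 0.2, 0.0],
     [0.1, 5.0, 0.2], [0.0, 5.1, 0.1],
     [0.2, 0.0, 5.0], [0.1, 0.2, 5.2]]
y = ["a", "a", "b", "b", "c", "c"]

sel = select_features_per_class(X, y, k=1)
print(sel)  # → {'a': [0], 'b': [1], 'c': [2]}
```

Each class ends up with its own small feature subset, so per-class classifiers can be trained independently — one plausible reason such a scheme parallelizes well across classes.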
Original language: English (US)
State: Published - May 8 2015

Fingerprint

Learning systems
Neural networks
Hardware
Decision making
Cluster computing
Smartphones
Parallel processing systems
Automobiles
Pattern recognition
Machinery
Feature extraction
Costs
Power plants
Robotics
Classifiers
Sampling
Data storage equipment
Internet of things
Sensors

