Understanding KERAS


Hi, everyone! I really believe in expressing my knowledge in pure and simple words! Today we are going to understand one of the leading DEEP LEARNING libraries, KERAS. We have already learnt about the deep learning concept, and machine learning as well; if you have not read those articles yet, read them first!


Introduction:
In our daily life, a library is a quiet place and a great source of knowledge: when we visit one, we find a collection of books, and each book binds a lot of information. But do you know what a library is in the programming domain? It is a collection of classes; every class has unique features, and we use them according to our needs. For example, the Math class library in C# gives us a lot of math functions without our writing any code.
In Python there are many useful libraries for deep learning, and Keras is one of the most popular among them. So, in this article we will discuss KERAS!
What is Keras?

Keras is a Python library for deep learning that can run on top of Theano or TensorFlow. Put another way, Keras is an open-source neural network library written in Python.
For deep learning work we use Keras (a Python library), which runs on Theano or TensorFlow. We covered the concept of deep learning in the DEEP LEARNING post; in terms of practical work this library plays a vital role. It was made for fast and easy research and development in deep learning. It runs on Python 2 or 3 and can seamlessly execute on GPUs and CPUs.
Keras contains numerous implementations of commonly used neural network building blocks such as layers, objectives, activation functions and optimizers, plus a host of tools to make working with image and text data easier. The code is hosted on GitHub, and community support forums include the GitHub issues page and a Slack channel.
Several people think that Keras is reserved for Python only, but it also allows users to deploy deep models on smartphones (iOS and Android) and on the Java Virtual Machine.

François Chollet developed Keras. Now let's understand the four guiding principles of Keras.

Principles



Modularity: A model can be understood as a sequence or a graph alone. All the concerns of a deep learning model are discrete components that can be combined in arbitrary ways.
Minimalism: The library provides just enough to achieve an outcome, no frills, maximizing readability.
Extensibility: New components are intentionally easy to add and use within the framework, intended for researchers to trial and explore new ideas.
Python: No separate model files with custom file formats. Everything is native Python.
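
To see this modularity in practice, here is a minimal sketch (assuming Keras is installed with a TensorFlow or Theano backend; the layer sizes and input dimension are illustrative choices, not taken from this article):

# Layers, activations, the objective and the optimizer are separate building
# blocks that we combine into one model.
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()                                   # a model as a simple sequence of layers
model.add(Dense(32, activation='relu', input_dim=8))   # hidden layer: 8 inputs -> 32 units
model.add(Dense(1, activation='sigmoid'))              # output layer: a single probability

# The objective (loss) and the optimizer are chosen independently and are easy to swap.
model.compile(optimizer='adam', loss='binary_crossentropy')
model.summary()                                        # print the resulting architecture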


Adoption:
Keras had over 200,000 users as of November 2017, and it was the 10th most cited tool in the KDnuggets 2018 software poll, registering 22% usage.
It also allows distributed training of deep learning models on clusters of Graphics Processing Units (GPUs).

User Experience.
Large adoption in the industry and research community.
Multi-backend, multi-platform.
Easy productization of models.



How to Install Keras

Keras is simple and straightforward to install; it helps if you have already worked with Python and SciPy. Theano or TensorFlow should already be installed on your machine.

These installation steps cover both backends, THEANO and TENSORFLOW.
Using PyPI, the installation process is very easy:

sudo pip install keras
At the time of writing, this gets you the most recent version of Keras, 1.0.0. On the command line you can also check the installed version of Keras with the command:
python -c "import keras; print(keras.__version__)"

The result of the above command will be:
1.0.0

So, can we upgrade the Keras installation? Yes, you can, by using the command:
sudo pip install --upgrade keras



In the upcoming article we will learn Keras by using it in a practical project on DISEASE IDENTIFICATION.
For more updates, keep in touch!!!

Introduction to Deep Learning

Hi, everyone!
I believe in sharing my experience and knowledge without any boundary, especially with those who have less understanding of Artificial Intelligence, Machine Learning and Deep Learning; that's why I don't prefer to use advanced vocabulary. After reading this article you will be able to understand what deep learning is, and what a neuron is and how it is structured. If you want the basic definition and understanding of machine learning first, read the Machine Learning article.
[Article writer]


Introduction to Deep Learning:
Deep learning is a subset of machine learning that deals with algorithms inspired by the structure and function of the brain (the neuron). In simple words, it is a technique to teach a computer to do what humans do naturally: learn by example. A computer model in deep learning learns to perform classification tasks directly from the resources it has been designed for, e.g. images, games, etc. It can reach state-of-the-art accuracy; models are trained using a large set of labeled data and neural network architectures that contain many layers. So, what is a neural network, what is its structure, and how can we understand the behavior of a neuron in our machine?

What is a Neuron?
  Consider the biological neuron. Its primary components are the soma (cell body), the axon (a long slender projection that conducts electrical impulses away from the cell body), dendrites (tree-like structures that receive messages from other neurons), and synapses (specialized junctions between neurons). The main components of the neuron's structure are the Dendrite, Cell Nucleus, Axon and Synapse: the dendrites receive messages from other cells, the cell nucleus controls the activity of the cell, the axon passes messages away from the cell body to other neurons, and the dendrites form one of the most well-known structures in the brain, the synapse. The synapse is the site of interaction between the neuron and the target cell. Synapses can be located in several places and are classified based on their location:
Axospinous – present on the dendritic spine
Axodendritic – present on the dendrite itself
Axosomatic - present on the soma (cell body)
Axoaxonic – present on the axon, or tail
How a neuron works in our brain:
Receive the incoming signals (information).
Combine the incoming signals to decide whether or not the signal should be passed along.
Communicate the signal to the target cells (other neurons).

What is a Neural Network?
       After understanding the biological neuron: in the computer world, a neural network works like that neuron model. It consists of different layers that identify the object in images, text or other applications. Generally the network consists of 3 basic layers: INPUT, HIDDEN and OUTPUT.

Input Layer (it receives all the input)

Hidden Layer (between the input and output layers; it transforms the input into a form that the output layer can use)

Output Layer (using the previous two layers, the output layer identifies the input)
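
As a rough sketch of how data flows through these three layers, the small NumPy example below pushes one input through an input, a hidden and an output layer; the layer sizes and the random weights are purely illustrative assumptions.

# Toy forward pass through input -> hidden -> output layers (illustrative sizes).
import numpy as np

np.random.seed(0)
x = np.array([0.5, 0.1, 0.9])      # input layer: 3 input values
W1 = np.random.randn(3, 4)         # weights from the input to 4 hidden units
W2 = np.random.randn(4, 2)         # weights from the hidden units to 2 output units

hidden = np.maximum(0, x @ W1)     # hidden layer transforms the input (ReLU)
output = hidden @ W2               # output layer produces the final scores
print(output)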


Biological Neuron VS Artificial Neuron:
    In an artificial neuron the main components are the INPUT, NODES, WEIGHTS and OUTPUT.
Let's understand this with the diagram.

                   
      We have understood the work of the dendrite, cell nucleus, axon and synapse in the human neuron. We implement those same working steps in a machine to make predictions and to work like a human, using feature extraction. Implementing these four parts of the neuron in a machine with deep learning is what makes your machine smarter.
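
A single artificial neuron can be sketched in a few lines of Python: it takes inputs, multiplies them by weights, sums them up, and squashes the result with an activation function. All the numbers below are made up for illustration.

# One artificial neuron: inputs * weights -> sum -> activation -> output.
import math

inputs  = [1.0, 0.5, -0.2]     # signals arriving at the neuron (like dendrites)
weights = [0.4, 0.7, 0.1]      # strength of each connection
bias    = 0.05

weighted_sum = sum(i * w for i, w in zip(inputs, weights)) + bias
output = 1 / (1 + math.exp(-weighted_sum))   # sigmoid activation: "fire" strongly or weakly
print(output)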

Why Deep Learning:
        Day by day we are moving ahead in the fields of technology and big data; that's why we often need advanced algorithms to keep up. Many software companies are moving towards the AI field to make their systems more intelligent, and it becomes more necessary as demand grows. To secure the world in terms of security, copyright issues and hacking we need systematic machine algorithms, so we need to work on AI and its subfields.


We also need deep learning because it is complex to extract features from images by hand, because algorithms become more complex as the amount of data increases, and because we need to process huge amounts of data and achieve the best performance with large amounts of data, among many other reasons.
That's why its usage graph is improving day by day.

Feature Extraction in Deep Learning:
      In deep learning we don't need to manually extract features from the image; the network learns them while training, and we just need to feed in the pixel values. Feature extraction plays a vital role in identifying the output after accepting the input from the user. For example, our machine identifies the picture of a dog among pictures of different animals using facial features, which we already understand. For further feature extraction we take the pixels of the image and plot their color scale on a graph, which identifies the color range of the image; in a second approach we can take the RGB colors, find the AVERAGE usage of each color, and save it in a database for future comparison.

                     


    At this stage two problems will be faced, namely:
CURSE OF DIMENSIONALITY: Each image has a large number of dimensions or features (for example 256 colors), so we try to reduce the number of features.
CROSS TALK: The RED color of the query image is compared not only to the RED of the other images in the database but also to their other colors: RED to RED, RED to PINK, RED to ORANGE, etc.
Feature extraction helps the algorithm to predict closer to the right output.
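
The second, hand-crafted approach described above (storing the average R, G and B values for later comparison) can be sketched with NumPy; the random "image" below just stands in for real pixel data.

# Hand-crafted feature: the average R, G and B values of an image.
import numpy as np

image = np.random.randint(0, 256, size=(64, 64, 3))  # stand-in for a real RGB image
avg_rgb = image.reshape(-1, 3).mean(axis=0)          # average over all pixels, per channel
print(avg_rgb)   # three numbers that could be saved in a database for comparison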

Example of Deep Learning:

 These are a few well-known examples of deep learning applications.
1.     Identifying diseases
2.     Understanding the stage (level) of a cancer
3.     Autonomous driving cars
4.     Music composition
5.     Colorization of black-and-white images into full-color images
6.     Object detection in images
7.     Dream reading
           and many more…

Introduction to Machine Learning


I believe in sharing my experience and knowledge without any boundary, especially with those who have less understanding of Artificial Intelligence, Machine Learning and Deep Learning; that's why I don't prefer to use advanced vocabulary. After reading this article you will be able to understand what machine learning is, its types, and the algorithms commonly used in it. If you want to get the basic definition and understanding of machine learning, read
[Article writer]



Machine Learning Introduction:
           There are many varieties of learning. If we talk about AI: initially we used to give the computer an algorithm to solve any problem; this is called Symbolic AI. In modern AI, we only give examples to the computer, and the computer itself learns from these examples or data. For example, if it has to tell a cat from a dog, we show it pictures of cats and similar photos of dogs, and the computer itself learns to tell the difference between cats and dogs. These methods are called machine learning, and most modern AI algorithms currently work on this principle.

In machine learning, we divide our data into three parts.

  Training Data:
        In machine learning, first we teach the computer through examples; this is called training. We give the computer both the example and the answer. For example, we give the computer a picture of a dog and also tell it that this is an image of a dog. If the computer answers wrong (for example, it says it is a cat), it re-adjusts itself so that the next time it does not make the same mistake; this process is called learning. The algorithm most used for this is called Gradient Descent. This is how our model gets trained.
 

Gradient descent is a first-order iterative optimization algorithm for finding the minimum of a function. To find a local minimum of a function using gradient descent, one takes steps proportional to the negative of the gradient (or approximate gradient) of the function at the current point.


[Wikipedia]
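
As a tiny illustration of this idea, the sketch below runs gradient descent on the one-variable function f(x) = x*x, whose minimum is at x = 0; the learning rate and starting point are arbitrary choices.

# Gradient descent on f(x) = x^2: repeatedly step opposite to the gradient f'(x) = 2x.
x = 5.0                  # arbitrary starting point
learning_rate = 0.1

for step in range(50):
    gradient = 2 * x                   # derivative of x^2 at the current point
    x = x - learning_rate * gradient   # move a little in the downhill direction

print(x)   # ends up very close to 0, the minimum of x^2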



      Validation Data:

         Once training is done, we test the capability of our model; the validation data is used for this purpose. If our model is not doing the right thing, we change the model's hyperparameters and try again; this process continues as long as the model's accuracy is not good enough.
    Testing Data:
    When our training is completed, we finally check our model on the test data. The purpose of isolating this data is that the validation data gets exposed during hyperparameter tuning (and training), so the model still needs to be checked on data it has never seen.
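
As a sketch of this three-way split (assuming scikit-learn is available; the data and the 60/20/20 proportions are made up for illustration), one common approach is to call train_test_split twice:

# Split a toy dataset into training, validation and test sets (60/20/20 here).
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(100).reshape(50, 2)   # 50 made-up samples with 2 features each
y = np.arange(50)                   # made-up labels

X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=42)
print(len(X_train), len(X_val), len(X_test))   # 30 10 10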
Neural Network: 
          The design of the neural network is inspired by the human mind. Its basic unit is the neuron: a neuron takes some inputs, multiplies them by weights, and applies a linear transformation, as seen in the diagram below.


The neural network consists of such neurons; each neuron is connected to the neurons in the next layer and passes its data to them. A neuron also applies an activation function before giving its data to the next neurons. A neural network can model any kind of non-linear data.


A neural network usually has an input layer, hidden layers, and an output layer (which is discussed in depth in the Difference among Artificial Intelligence, Machine Learning and Deep Learning article). In deep learning, we increase the number of layers, which lets the model represent complex data. Another important point is that features do not have to be extracted by hand in deep learning; the model extracts them itself through the transformations of its different layers.


Types of Machine Learning:
Now, let's understand the types of machine learning at a basic level.

Supervised Learning:
        In this type, we provide the data and its labels during training, and the computer adjusts itself by looking at the labels, since it has the answer. In simple words, it consists of a dependent variable which is predicted from a set of predictors, also called independent variables. Using this set of variables, we generate a function that maps inputs to the desired output. Examples of supervised learning are Regression, Decision Tree, Random Forest, KNN, Logistic Regression, etc.
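
As a small supervised-learning sketch (using scikit-learn, which is an assumed choice here since the article does not name a library, with made-up data), a logistic regression can be fitted on labeled examples like this:

# Supervised learning: fit a logistic regression on a tiny labeled dataset.
from sklearn.linear_model import LogisticRegression

X = [[1, 2], [2, 1], [8, 9], [9, 8]]   # made-up feature vectors (independent variables)
y = [0, 0, 1, 1]                       # known labels provided during training

clf = LogisticRegression()
clf.fit(X, y)                                  # the model adjusts itself using the answers
print(clf.predict([[1.5, 1.5], [8.5, 8.5]]))   # expected output: [0 1]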

Unsupervised Learning:
        In this method we do not provide labels or feedback; the actual purpose is to transform the data or to understand it better. In clustering, which is a kind of unsupervised learning, we divide the data into different groups so that the data within a group is similar. Dimensionality reduction is another type, in which we reduce the size of the data. Clustering is widely used for segmenting customers into different groups for specific interventions. Examples of unsupervised learning: the Apriori algorithm and K-means.
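
A minimal clustering sketch with K-means (again assuming scikit-learn; the points are made up) looks like this:

# Unsupervised learning: group unlabeled points into 2 clusters with K-means.
from sklearn.cluster import KMeans

X = [[1, 1], [1.5, 2], [8, 8], [8.5, 9]]   # no labels, just raw data points
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)                       # e.g. [0 0 1 1]: two discovered groups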

Reinforcement Learning:
       In this method the computer learns from feedback from its environment. For instance, if a vehicle is driving automatically, it can adjust itself to the other vehicles and the road conditions; similar methods are used in games, etc.

An example of reinforcement learning is the Markov Decision Process.
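
As a toy sketch of a Markov Decision Process (everything here, from the two states and two actions to the rewards, is invented purely for illustration), value iteration computes how valuable each state is in the long run:

# Toy Markov Decision Process with 2 states and 2 actions, solved by value iteration.
states, actions, gamma = [0, 1], [0, 1], 0.9
P = {  # P[s][a] = list of (probability, next_state, reward) outcomes
    0: {0: [(1.0, 0, 0.0)], 1: [(0.8, 1, 5.0), (0.2, 0, 0.0)]},
    1: {0: [(1.0, 0, 1.0)], 1: [(1.0, 1, 2.0)]},
}

V = {s: 0.0 for s in states}             # start with zero value everywhere
for _ in range(100):                     # repeatedly back up the best expected return
    V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a]) for a in actions)
         for s in states}

print(V)   # long-run value of starting in each state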
Machine Learning Algorithms:
Here is a list of commonly used machine learning algorithms, which can be applied to almost any kind of data problem.

  • SVM
  • K-Means
  • Random Forest
  • KNN
  • Logistic Regression
  • Decision Tree
  • Naive Bayes
  • Linear Regression
  • Dimensionality Reduction Algorithms
  • Gradient Boosting algorithms: GBM, XGBoost, LightGBM, CatBoost