DNN Framework Interview Questions and Answers for Freshers & Experienced

Explain the importance of LSTM.

Long short-term memory (LSTM) is an artificial recurrent neural network architecture used in the field of deep learning. Unlike standard feedforward neural networks, an LSTM has feedback connections, which in principle make it a “general purpose computer”. It can process not only single data points, but also entire sequences of data.

LSTMs are a special kind of Recurrent Neural Network that is capable of learning long-term dependencies.
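
As a quick illustration, here is a minimal Keras sketch of an LSTM model (the sequence length of 10 timesteps and 8 features per step are assumed values, purely for illustration):

import tensorflow as tf

# Sketch: an LSTM layer reads sequences of 10 timesteps x 8 features,
# and a dense head produces a single prediction per sequence.
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(10, 8)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')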

What is Exploding Gradient Descent?

Exploding gradients are a problem in which large error gradients accumulate and result in very large updates to neural network model weights during training.

The gradient descent process works best when these updates are small and controlled. When the magnitudes of the gradients accumulate, the network is likely to become unstable, which can cause poor predictions or even a model that reports nothing useful whatsoever.
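
As a rough sketch of how one might detect this in practice, the global gradient norm can be monitored with tf.GradientTape (the toy model and random data below are assumptions, only to show the pattern):

import tensorflow as tf

# Toy model and data, purely to illustrate the monitoring pattern
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation='relu', input_shape=(3,)),
    tf.keras.layers.Dense(1),
])
x = tf.random.normal((8, 3))
y = tf.random.normal((8, 1))
with tf.GradientTape() as tape:
    loss = tf.reduce_mean((model(x) - y) ** 2)
grads = tape.gradient(loss, model.trainable_variables)
# A norm that grows rapidly from step to step signals exploding gradients
print(tf.linalg.global_norm(grads))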

What is Vanishing Gradient? And how is this harmful?

When we do backpropagation, the gradients tend to get smaller and smaller as we keep moving backward through the network. This means that the neurons in the earlier layers learn very slowly compared to the neurons in the later layers of the hierarchy.

This is harmful because the earlier layers of the network are important: they are responsible for learning and detecting the simple patterns that are the building blocks of the network. If they learn too slowly, the training of the whole network suffers.
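
A tiny numeric illustration of why this happens: the derivative of the sigmoid activation is at most 0.25, so the gradient signal shrinks multiplicatively with every layer it is propagated back through:

# Toy illustration (pure Python): upper bound of the backpropagated
# gradient through 20 sigmoid layers
grad = 1.0
for layer in range(20):
    grad *= 0.25          # max value of sigmoid'(z) at each layer
print(grad)               # ~9.1e-13: earlier layers receive almost no signal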

What are some issues faced while training an RNN?

Recurrent Neural Networks use the backpropagation algorithm for training, but it is applied at every timestep. This is commonly known as Backpropagation Through Time (BPTT).

There are some issues with BPTT, such as:

<> Vanishing Gradient
<> Exploding Gradient

What is an RNN?

Recurrent Neural Networks (RNNs) are a type of artificial neural network designed to recognize patterns in sequences of data, such as text, genomes, handwriting, the spoken word, or numerical time series data. RNNs use the backpropagation algorithm for training. Because of their internal memory, RNNs are able to remember important things about the input they received, which enables them to be very precise in predicting what comes next.
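
For intuition, here is a minimal NumPy sketch of a vanilla RNN cell unrolled over a short sequence (the sizes and random data are assumed, purely for illustration):

import numpy as np

W_x = np.random.randn(4, 3) * 0.1   # input-to-hidden weights
W_h = np.random.randn(4, 4) * 0.1   # hidden-to-hidden weights: the "memory"
b = np.zeros(4)
h = np.zeros(4)                     # initial hidden state
for x_t in np.random.randn(5, 3):   # 5 timesteps, 3 features each
    h = np.tanh(W_x @ x_t + W_h @ h + b)   # h carries context across timesteps
print(h)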

Explain the different Layers of CNN.

There are four layered concepts we should understand in Convolutional Neural Networks:

Convolution: The convolution layer comprises a set of independent filters. All these filters are initialized randomly and become our parameters, which are learned by the network subsequently.

ReLU: This layer is used with the convolutional layer and applies an element-wise non-linearity to the convolution's output.

Pooling: This layer progressively reduces the spatial size of the representation, cutting down the number of parameters and computations in the network.

Fully Connected: This layer connects every neuron in one layer to every neuron in the next and produces the final classification or regression output.
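
As a rough illustration, here is a minimal Keras sketch stacking these four layer types (the 28x28 grayscale input shape and layer sizes are assumed):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, (3, 3), activation='relu',
                           input_shape=(28, 28, 1)),   # convolution + ReLU
    tf.keras.layers.MaxPooling2D((2, 2)),              # pooling
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax'),   # fully connected
])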

What is a CNN?

A convolutional neural network (CNN, or ConvNet) is a class of deep neural networks, most commonly applied to analyzing visual imagery. Unlike standard neural networks, where the input is a vector, here the input is a multi-channeled image. CNNs use a variation of multilayer perceptrons designed to require minimal preprocessing.

What is Computational Graph?

A computational graph is a series of TensorFlow operations arranged as nodes in the graph. Each node takes zero or more tensors as input and produces a tensor as output.
Basically, one can think of a computational graph as an alternative way of conceptualizing the mathematical calculations that take place in a TensorFlow program. The operations assigned to different nodes of a computational graph can be performed in parallel, thus providing better performance in terms of computation.
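
For example, in TensorFlow 2 a computational graph can be built by tracing a Python function with tf.function (the toy function below is an assumed example):

import tensorflow as tf

@tf.function          # traces the function into a graph of TensorFlow ops
def f(x, y):
    return x * y + y

print(f(tf.constant(2.0), tf.constant(3.0)))  # tf.Tensor(9.0, ...)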



List a few advantages of TensorFlow.

<> It has platform flexibility.
<> It is easily trainable on CPU as well as GPU for distributed computing.
<> TensorFlow has auto-differentiation capabilities.
<> It has advanced support for threads, asynchronous computation, and queues.
<> It is customizable and open source.

What are Tensors?

Tensors are the de facto standard for representing data in deep learning. They are just multidimensional arrays, which allow you to represent data with higher dimensions. In deep learning, you generally deal with high-dimensional data sets, where the dimensions refer to the different features present in the data set.
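
A small sketch of tensors of increasing rank in TensorFlow (the values are arbitrary):

import tensorflow as tf

scalar = tf.constant(3.0)                        # rank 0
vector = tf.constant([1.0, 2.0])                 # rank 1
matrix = tf.constant([[1.0, 2.0],
                      [3.0, 4.0]])               # rank 2
print(scalar.shape, vector.shape, matrix.shape)  # () (2,) (2, 2)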

Name a few deep learning frameworks

<> TensorFlow
<> Caffe
<> The Microsoft Cognitive Toolkit/CNTK
<> Torch/PyTorch
<> MXNet
<> Chainer
<> Keras

While training a neural network, you notice that the loss does not decrease in the first few epochs. What could be the reason?

The reasons for this could be:

<> The learning rate is too low.
<> The regularization parameter is too high.
<> The optimization is stuck at a local minimum.

What is Dropout?

Dropout is a regularization technique used to avoid overfitting and thus increase generalization power. Generally, we should use a small dropout rate of 20%-50% of neurons, with 20% providing a good starting point. A probability that is too low has minimal effect, and a value that is too high results in under-learning by the network.

It also helps to use a larger network: you are likely to get better performance when dropout is used on a larger network, as it gives the model more of an opportunity to learn independent representations.
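
A minimal Keras sketch of dropout between two dense layers (the layer sizes are assumed; the 0.2 rate matches the 20% starting point suggested above):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu', input_shape=(20,)),
    tf.keras.layers.Dropout(0.2),   # randomly zeroes 20% of activations during training
    tf.keras.layers.Dense(10, activation='softmax'),
])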

What are Hyperparameters? Name a few used in any Neural Network.

Hyperparameters are the variables which determine the network structure (e.g., the number of hidden units) and the variables which determine how the network is trained (e.g., the learning rate). Hyperparameters are set before training.

<> Number of Hidden Layers
<> Network Weight Initialization
<> Activation Function
<> Learning Rate
<> Momentum
<> Number of Epochs
<> Batch Size
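
For illustration, here is where several of these hyperparameters appear in a typical Keras workflow (all values below are assumed toy settings, not recommendations):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu',                  # hidden units, activation
                          kernel_initializer='glorot_uniform',    # weight initialization
                          input_shape=(10,)),
    tf.keras.layers.Dense(1),
])
opt = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)   # learning rate, momentum
model.compile(optimizer=opt, loss='mse')
# model.fit(x, y, epochs=20, batch_size=32)   # number of epochs, batch size (x, y assumed)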

What’s the difference between a feed-forward and a backpropagation neural network?

A Feed-Forward Neural Network is a type of Neural Network architecture where the connections are “fed forward”, i.e. do not form cycles. The term “Feed-Forward” is also used when you input something at the input layer and it travels from input to hidden and from hidden to the output layer.

Backpropagation is a training algorithm consisting of 2 steps:

<> Feed-Forward the values.
<> Calculate the error and propagate it back to the earlier layers.

So to be precise, forward-propagation is part of the backpropagation algorithm but comes before back-propagating.

Why is Weight Initialization important in Neural Networks?

Weight initialization is one of the most important steps. A bad weight initialization can prevent a network from learning, while a good weight initialization helps it converge more quickly and reach a better overall error.

Biases can generally be initialized to zero. The rule for setting the weights is to keep them close to zero without being too small.
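
In Keras, this rule is conveniently available as the Glorot/Xavier initializer (shown explicitly below; the layer size is an assumed example):

import tensorflow as tf

layer = tf.keras.layers.Dense(
    64,
    kernel_initializer='glorot_uniform',   # small weights, close to zero but not too small
    bias_initializer='zeros',              # biases can generally start at zero
)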

Which is Better Deep Networks or Shallow ones? and Why?

Both networks, be they shallow or deep, are capable of approximating any function. What matters is how precise the network is in terms of getting results. A shallow network works with only a few features, as it can't extract more, whereas a deep network goes deep by computing efficiently and working on more features/parameters.

What Is Data Normalization And Why Do We Need It?

Data normalization is a very important preprocessing step, used to rescale values to fit within a specific range and assure better convergence during backpropagation. In general, it boils down to subtracting the mean of each data point and dividing by its standard deviation.
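
A minimal NumPy sketch of this standardization, applied per feature (the toy data is assumed):

import numpy as np

x = np.array([[50.0, 2.0],
              [60.0, 4.0],
              [70.0, 6.0]])
x_norm = (x - x.mean(axis=0)) / x.std(axis=0)   # zero mean, unit variance per column
print(x_norm)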

These were some basic Deep Learning Interview Questions. Now, let’s move on to some advanced ones.

What are the different parts of a multi-layer perceptron?

Input Nodes: The Input nodes provide information from the outside world to the network and are together referred to as the “Input Layer”. No computation is performed in any of the Input nodes – they just pass on the information to the hidden nodes.

Hidden Nodes: The Hidden nodes perform computations and transfer information from the input nodes to the output nodes. A collection of hidden nodes forms a “Hidden Layer”. While a network will only have a single input layer and a single output layer, it can have zero or multiple Hidden Layers.

Output Nodes: The Output nodes are collectively referred to as the “Output Layer” and are responsible for computations and transferring information from the network to the outside world.

What is a Multi-Layer Perceptron?

A multilayer perceptron (MLP) is a deep artificial neural network composed of more than one perceptron. It has an input layer to receive the signal, an output layer that makes a decision or prediction about the input, and, in between those two, an arbitrary number of hidden layers that are the true computational engine of the MLP.
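
A minimal Keras sketch of an MLP with two hidden layers (the input size of 20 features and the layer widths are assumed):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),                      # input layer: 20 features
    tf.keras.layers.Dense(64, activation='relu'),     # hidden layer 1
    tf.keras.layers.Dense(32, activation='relu'),     # hidden layer 2
    tf.keras.layers.Dense(1, activation='sigmoid'),   # output layer
])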

What are the shortcomings of a single layer perceptron?

Well, there are two major problems:

<> Single-Layer Perceptrons cannot classify non-linearly separable data points; the classic example is the XOR function.
<> Complex problems that involve a lot of parameters cannot be solved by Single-Layer Perceptrons.

What are the steps for using a gradient descent algorithm?

<> Initialize the weights and bias with random values.
<> Pass an input through the network and get values from the output layer.
<> Calculate the error between the actual value and the predicted value.
<> Go to each neuron which contributes to the error and then change its respective values to reduce the error.
<> Reiterate until you find the best weights of the network.
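
A toy sketch of these steps on a one-parameter objective f(w) = (w - 3)^2 (the objective is assumed, purely for illustration):

# Gradient descent on f(w) = (w - 3)^2, whose minimum is at w = 3
w = 0.0        # step 1: initialize
lr = 0.1
for step in range(50):
    grad = 2 * (w - 3)   # steps 2-3: compute the error gradient df/dw
    w -= lr * grad       # step 4: change the value to reduce the error
print(w)                 # step 5: after reiterating, w is close to 3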

What are the benefits of mini-batch gradient descent?

<> It is more efficient than stochastic gradient descent.
<> It generalizes better by finding flat minima.
<> Mini-batches help approximate the gradient of the entire training set, which helps us avoid local minima.

What is gradient descent?

Gradient descent is an optimization algorithm used to minimize some function by iteratively moving in the direction of steepest descent as defined by the negative of the gradient.

Stochastic Gradient Descent: Uses only a single training example to calculate the gradient and update parameters.

Batch Gradient Descent: Calculate the gradients for the whole dataset and perform just one update at each iteration.

Mini-batch Gradient Descent: Mini-batch gradient descent is a variation of stochastic gradient descent where, instead of a single training example, a mini-batch of samples is used. It is one of the most popular optimization algorithms.
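
A schematic NumPy sketch of the mini-batch variant on a toy linear-regression problem (the data, batch size, and learning rate are assumed); batch gradient descent corresponds to B = len(X), and stochastic gradient descent to B = 1:

import numpy as np

X = np.random.randn(100, 3)
y = np.random.randn(100)
w = np.zeros(3)
lr, B = 0.01, 10                                   # batch size B
for start in range(0, len(X), B):
    xb, yb = X[start:start + B], y[start:start + B]
    grad = 2 * xb.T @ (xb @ w - yb) / len(xb)      # MSE gradient on the mini-batch
    w -= lr * grad                                 # one update per mini-batch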

What is the significance of a Cost/Loss function?

A cost function is a measure of the accuracy of the neural network with respect to a given training sample and expected output. It provides the performance of a neural network as a whole. In deep learning, the goal is to minimize the cost function. For that, we use the concept of gradient descent.
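
For example, the mean squared error cost over a training sample can be computed as follows (the toy values are assumed):

import numpy as np

y_true = np.array([1.0, 0.0, 1.0])   # expected outputs
y_pred = np.array([0.9, 0.2, 0.8])   # network outputs
mse = np.mean((y_true - y_pred) ** 2)
print(mse)   # 0.03: the smaller the cost, the better the network fits the sample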

Explain Learning of a Perceptron.

1. Initialize the weights and threshold.
2. Provide the input and calculate the output.
3. Update the weights.
4. Repeat steps 2 and 3.
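
A minimal NumPy sketch of these steps, learning the AND function (the dataset and learning rate are assumed toy choices):

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])                      # AND truth table
w = np.zeros(2); b = 0.0; lr = 0.1              # step 1: initialize
for epoch in range(10):
    for xi, target in zip(X, y):
        pred = int(w @ xi + b > 0)              # step 2: calculate the output
        w += lr * (target - pred) * xi          # step 3: update the weights
        b += lr * (target - pred)
print(w, b)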

What are the activation functions?

The activation function translates the inputs into outputs. It decides whether a neuron should be activated or not by calculating the weighted sum and further adding a bias to it. The purpose of the activation function is to introduce non-linearity into the output of a neuron.

There can be many Activation functions like:

<> Linear or Identity
<> Unit or Binary Step
<> Sigmoid or Logistic
<> Tanh
<> ReLU
<> Softmax
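
For reference, a few of these can be written in NumPy in a line or two each (a minimal sketch):

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))          # squashes to (0, 1)

def relu(z):
    return np.maximum(0.0, z)            # zero for negative inputs

def softmax(z):
    e = np.exp(z - np.max(z))            # shift for numerical stability
    return e / e.sum()                   # outputs sum to 1

print(sigmoid(0.0), relu(-2.0), softmax(np.array([1.0, 2.0, 3.0])))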

What is the role of weights and bias?

For a perceptron, there can be one more input called bias. While the weights determine the slope of the classifier line, the bias allows us to shift the line towards the left or right. Normally, the bias is treated as another weighted input with a fixed input value of x0 = 1.

What is Perceptron? And How does it Work?

If we focus on the structure of a biological neuron, it has dendrites, which are used to receive inputs. These inputs are summed in the cell body and, using the axon, passed on to the next biological neuron:

<> Dendrite: Receives signals from other neurons
<> Cell Body: Sums all the inputs
<> Axon: Used to transmit signals to the other cells

A perceptron works analogously: it receives weighted inputs, sums them together with a bias, and applies an activation function to decide its output.

Do you think Deep Learning is Better than Machine Learning? If so, why?

Though traditional ML algorithms solve a lot of our cases, they are not useful when working with high-dimensional data, that is, where we have a large number of inputs and outputs. For example, in the case of handwriting recognition, we have a large amount of input data, with different types of inputs associated with different types of handwriting.

The second major challenge is to tell the computer what are the features it should look for that will play an important role in predicting the outcome as well as to achieve better accuracy while doing so.

Which deep learning algorithm is the best for face detection?

Face identification may be accomplished using a variety of machine learning methods, but the best ones use Convolutional Neural Networks and deep learning. Some notable face detection and recognition algorithms are FaceNet, Probabilistic Face Embeddings, ArcFace, CosFace, and SphereFace.

Explain Stochastic Gradient Descent. How is it different from Batch Gradient Descent ?

Stochastic Gradient Descent: Stochastic Gradient Descent seeks to tackle the major difficulty with Batch Gradient Descent, which is the use of the entire training set to calculate gradients at each step. It is stochastic in nature, which means it picks a "random" instance of training data at each step and then computes the gradient, which is significantly faster than Batch Gradient Descent because there is much less data to process at once. Stochastic Gradient Descent is best suited for unconstrained optimization problems. The stochastic nature of SGD has a drawback: once it gets close to the minimum value, it doesn't settle down and instead bounces around, giving us a good but not optimal value for the model parameters. This can be addressed by lowering the learning rate at each step, which reduces the bouncing and allows SGD to settle down near the global minimum after some time.

Explain Batch Gradient Descent.

Batch Gradient Descent: Batch Gradient Descent entails computation (involved in each step of gradient descent) over the entire training set at each step, and hence it is very slow on very big training sets. As a result, Batch Gradient Descent becomes extremely computationally expensive. It is ideal for error manifolds that are convex or somewhat smooth. Batch Gradient Descent also scales nicely as the number of features grows.

In a Convolutional Neural Network (CNN), how can you fix the constant validation accuracy?

When training any neural network, constant validation accuracy is a common issue: the network simply memorizes the samples, resulting in an over-fitting problem. Over-fitting means that the neural network model performs admirably on the training sample, but its performance deteriorates on the validation set. Following are some ways of improving a CNN's constant validation accuracy:

<> It is always a good idea to split the dataset into three sections: training, validation, and testing.
<> When working with limited data, the difficulty can be handled by experimenting with the neural network's parameters.
<> By increasing the training dataset's size.
<> By using batch normalization.
<> By implementing regularization.
<> By reducing the complexity of the network.

Explain the difference between a shallow network and a deep network.

A neural network has an input layer, an output layer, and one or more hidden layers. Shallow neural networks are those that have only one hidden layer, whereas deep neural networks include numerous hidden layers. Both shallow and deep networks can approximate any function; however, shallow networks require a large number of parameters to do so, whereas deep networks, because of their several layers, can fit functions with fewer parameters. Deep networks are currently favored over shallow networks because the model learns a new and more abstract representation of the input at each layer. In comparison to shallow networks, they are also far more efficient in terms of the number of parameters and computations.

What is a tensor in deep learning?

A tensor is a multidimensional array that represents a generalization of vectors and matrices. It is one of the key data structures used in deep learning. Tensors are represented as n-dimensional arrays of base data types. The data type of each element in the tensor is the same, and the data type is always known. It is possible that only a portion of the shape (that is, the number of dimensions and the size of each dimension) is known. Most operations yield fully-known tensors if their inputs are likewise fully known; in other circumstances, however, the shape of a tensor can only be determined at graph execution time.

Is it possible to train a neural network model by setting all biases to 0? Also, is it possible to train a neural network model by setting all of the weights to 0?

Yes, even if all of the biases are set to zero, the neural network model has a chance of learning.

No, training a model by setting all of the weights to 0 is impossible, since the neural network will never learn to complete a task. When all weights are set to zero, the derivatives for every weight are identical, so the neurons learn the same features in each iteration; the symmetry between them is never broken. Any constant initialization of the weights, not just zero, is likely to produce a poor result.

What are the advantages of transfer learning?

Following are the advantages of transfer learning :

<> Better initial model: In other methods of learning, you must create a model from scratch. Transfer learning offers a better starting point because it allows us to perform tasks at a higher level without having to know the details of the starting model.
<> Higher learning rate: Because the problem has already been learned for a similar task, transfer learning allows for a faster learning rate during training.
<> Higher accuracy after training: Thanks to the better starting point and higher learning rate, transfer learning allows a deep learning model to converge at a higher performance level, resulting in more accurate output.

Explain transfer learning in the context of deep learning.

Transfer learning is a learning technique that allows data scientists to use what they have learned from a previous machine learning model that was used for a similar task. The ability of humans to transfer their knowledge serves as an analogy: if you learn to ride a bicycle, you can learn to operate other two-wheeled vehicles more easily. In the same way, a model trained for autonomous car driving can be reused for autonomous truck driving. The features and weights can be used to train the new model, allowing it to be reused. Transfer learning works effectively for quickly training a model when there is limited data.
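
A minimal Keras sketch of this idea, reusing a pretrained image backbone for a new task (the MobileNetV2 backbone and the 5-class head are assumed example choices):

import tensorflow as tf

base = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3),
                                         include_top=False,
                                         weights='imagenet')
base.trainable = False                    # reuse the learned features as-is
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation='softmax'),   # new task-specific head
])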

What do you mean by hyperparameters in the context of deep learning?

Hyperparameters are variables that determine the network topology (for example, the number of hidden units) and how the network is trained (Eg: Learning Rate). They are set before training the model, that is, before optimizing the weights and the bias.

Following are some of the examples of hyperparameters:-

<> Number of hidden layers: With regularisation techniques, increasing the number of hidden units within a layer can boost accuracy, while reducing the number of units too far may cause underfitting.

<> Learning Rate: The learning rate is the rate at which a network's parameters are updated. A low learning rate slows down the learning process but eventually converges. A higher learning rate accelerates the learning process but may not converge. A decaying learning rate is usually preferred.

Explain Data Normalisation. What is the need for it?

Data Normalisation is a technique in which data is transformed in such a way that they are either dimensionless or have a similar distribution. It is also known as standardization and feature scaling. It's a pre-processing procedure for the input data that removes redundant data from the dataset.

Normalization provides each variable equal weights/importance, ensuring that no single variable biases model performance in its favour simply because it is larger. It vastly improves model precision by converting the values of numeric columns in a dataset to a similar scale without distorting the range of values.

Explain Forward and Back Propagation in the context of deep learning.

<> Forward Propagation: The hidden layer, between the input layer and the output layer of the network, receives inputs with weights. We calculate the output of the activation at each node at each hidden layer, and this propagates to the next layer until we reach the final output layer. We go forward from the inputs to the final output layer, which is known as the forward propagation.

<> Back Propagation: It sends error information from the network's last layer back to all of the weights within the network. It is a technique for fine-tuning the weights of a neural network based on the error rate of the previous epoch (i.e., iteration). By fine-tuning the weights, you can lower error rates and improve the model's generalization, making it more dependable. The process of backpropagation can be broken down into the following steps:

1. Propagate training data through the network to generate output.
2. Compute the error derivative for the output activations using the target and output values.
3. Backpropagate to compute the derivative of the error with respect to the previous layer's output activations, and so on for all hidden layers.
4. Use the previously obtained derivatives to calculate the error derivative with respect to the weights.
5. Update the weights based on these derivatives.
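
A minimal NumPy sketch of one forward and one backward pass for a single sigmoid neuron (the example input, target, and learning rate are assumed):

import numpy as np

x = np.array([0.5, -0.2]); y = 1.0      # one training example
w = np.array([0.1, 0.3]); b = 0.0; lr = 0.1

z = w @ x + b                           # forward: weighted sum
a = 1 / (1 + np.exp(-z))                # forward: activation (prediction)
loss = 0.5 * (a - y) ** 2

grad_a = a - y                          # backward: dLoss/da
grad_z = grad_a * a * (1 - a)           # chain rule through the sigmoid
w -= lr * grad_z * x                    # update the weights
b -= lr * grad_z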

What do you understand about gradient clipping in the context of deep learning?

Gradient Clipping is a technique for dealing with the problem of exploding gradients (a situation in which huge error gradients build up over time, resulting in massive modifications to neural network model weights during training) that happens during backpropagation. The problem of exploding gradients occurs when the gradients get excessively big during training, causing the model to become unstable. If the gradient has crossed the anticipated range, the gradient values are driven element-by-element to a specific minimum or maximum value. Gradient clipping improves numerical stability while training a neural network, but it has little effect on the performance of the model.
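
In Keras, gradient clipping can be requested directly on the optimizer (the values below are assumed examples):

import tensorflow as tf

# Clip the gradient norm of each variable to at most 1.0 during training
opt = tf.keras.optimizers.SGD(learning_rate=0.01, clipnorm=1.0)
# Alternatively, clipvalue=0.5 would clip each gradient element to [-0.5, 0.5]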

What do you mean by end-to-end learning?

It is a deep learning procedure in which a model is fed raw data and all of the different processing steps are trained simultaneously, rather than sequentially, to produce the desired result with no intermediate hand-crafted stages. End-to-end learning has the advantage of eliminating the need for explicit feature engineering, which usually results in lower bias. Driverless automobiles are an excellent example of end-to-end learning: they are guided by human input and are programmed to learn and interpret information automatically using a CNN to fulfill tasks. Another good example is the generation of a written transcript (output) from a recorded audio clip (input); the model skips all of the steps in the middle and learns to manage the entire sequence of steps and tasks directly.

What are the different types of deep neural networks?

Following are the different types of deep neural networks:-

<> FeedForward Neural Network:- This is the most basic type of neural network, in which flow control starts at the input layer and moves towards the output layer. These networks may have only a single layer or a single hidden layer. Since data only flows in one direction, there is no backpropagation mechanism in this network. Each node in this network receives a weighted sum of the inputs. These networks are utilised in the computer vision-based facial recognition method.

<> Radial Basis Function Neural Network:- This type of neural network usually has more than one layer, preferably two. The relative distance from any location to the center is determined in this type of network and passed on to the next layer. In order to avoid blackouts, radial basis networks are commonly employed in power restoration systems to restore power in the shortest period possible.

<> Multi-Layer Perceptrons (MLP):- A multilayer perceptron (MLP) is a type of feedforward artificial neural network (ANN). MLPs are the simplest deep neural networks, consisting of a succession of fully connected layers. Each successive layer is made up of a collection of nonlinear functions of the weighted sum of all the previous layer's outputs. Speech recognition and other machine learning systems rely heavily on these networks.

Explain what a deep neural network is.

An artificial neural network (ANN) with numerous layers between the input and output layers is known as a deep neural network (DNN). Deep neural networks are neural networks that use deep architectures; the term "deep" refers to functions with a higher number of layers and units in a single layer. It is possible to create more accurate models by adding more and larger layers to capture higher levels of patterns.

What are the disadvantages of neural networks?

Following are the disadvantages of neural networks:-

<> The "black box" aspect of neural networks is a well-known disadvantage. That is, we have no idea how or why our neural network produced a certain result. When we enter a dog image into a neural network and it predicts that it is a duck, we may find it challenging to understand what prompted it to make this prediction.
<> It takes a long time to create a neural network model.
<> Neural network models are computationally expensive to build because a lot of computation needs to be done at each layer.
<> A neural network model requires significantly more data than a traditional machine learning model to train.

What are the advantages of neural networks?

Following are the advantages of neural networks:

<> Neural networks are extremely adaptable, and they may be used for both classification and regression problems, as well as much more complex problems. Neural networks are also quite scalable. We can create as many layers as we wish, each with its own set of neurons. When there are a lot of data points, neural networks have been shown to generate the best outcomes. They are best used with non-linear data such as images, text, and so on. They can be applied to any data that can be transformed into a numerical value.

<> Once a neural network model has been trained, it delivers output very quickly. Thus, neural networks are time-effective.

Explain learning rate in the context of neural network models. What happens if the learning rate is too high or too low?

Learning rate is a number that ranges from 0 to 1. It is one of the most important tunable hyperparameters in neural network training models. The learning rate determines how quickly or slowly a neural network model adapts to a given situation and learns. A higher learning rate value indicates that the model only needs a few training epochs and produces rapid changes, whereas a lower learning rate indicates that the model may take a long time to converge or may never converge and become stuck on a poor solution. As a result, it is recommended that a good learning rate value be established by trial and error rather than using a learning rate that is too low or too high.

In short, a learning rate that is too big causes the updates to overshoot and move away from the desired minimum, whereas a suitably small learning rate eventually leads to the desired output, albeit more slowly.

What are the applications of deep learning?

Following are some of the applications of deep learning:-

<> Pattern recognition and natural language processing.
<> Recognition and processing of images.
<> Automated translation.
<> Analysis of sentiment.
<> Systems for answering questions.
<> Classification and detection of objects.
<> Handwriting generation by machine.
<> Automated text generation.
<> Colorization of black and white images.

What are Neural Networks?

Neural networks are artificial systems that bear a strong resemblance to the biological neural networks in the human body. A neural network is a set of algorithms that attempts to recognize underlying relationships in a batch of data using a method that mimics how the human brain works. Without any task-specific rules, these systems learn to perform tasks by being exposed to a variety of datasets and examples. The notion is that, instead of being programmed with a pre-coded understanding of these datasets, the system derives identifying traits from the data it is fed. Neural networks are built on threshold logic computational models. Because neural networks can adapt to changing input, they can produce the best possible outcome without the output criteria needing to be redesigned.
