Tuesday, May 14, 2013

Hadoop Interview Questions - MapReduce

What is MapReduce?

MapReduce is a framework and programming model used for processing large data sets in parallel across clusters of computers using distributed programming.

What are ‘maps’ and ‘reduces’?

‘Maps’ and ‘Reduces’ are the two phases of solving a query in HDFS. The ‘Map’ is responsible for reading data from the input location and, based on the input type, generating key-value pairs, that is, an intermediate output on the local machine. The ‘Reducer’ is responsible for processing the intermediate output received from the mappers and generating the final output.

What are the four basic parameters of a mapper?

The four basic parameters of a mapper are LongWritable, Text, Text and IntWritable. The first two represent the input parameters and the second two represent the intermediate output parameters.
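As a sketch of such a mapper, assuming the classic org.apache.hadoop.mapred API and a word-count style job (the class names here are illustrative, not from the original post):

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    // Input key/value: LongWritable (byte offset), Text (the line).
    // Intermediate output key/value: Text (a word), IntWritable (a count).
    public class WordCountMapper extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {

        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        public void map(LongWritable key, Text value,
                        OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            StringTokenizer tokens = new StringTokenizer(value.toString());
            while (tokens.hasMoreTokens()) {
                word.set(tokens.nextToken());
                output.collect(word, ONE);  // emit an intermediate key-value pair
            }
        }
    }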

What are the four basic parameters of a reducer?

The four basic parameters of a reducer are Text, IntWritable, Text and IntWritable. The first two represent the intermediate output parameters and the second two represent the final output parameters.
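A matching reducer sketch under the same assumptions; it sums the counts emitted by the mapper above:

    import java.io.IOException;
    import java.util.Iterator;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reducer;
    import org.apache.hadoop.mapred.Reporter;

    // Intermediate input key/value: Text, IntWritable.
    // Final output key/value: Text, IntWritable.
    public class WordCountReducer extends MapReduceBase
            implements Reducer<Text, IntWritable, Text, IntWritable> {

        public void reduce(Text key, Iterator<IntWritable> values,
                           OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();  // add up all counts for this key
            }
            output.collect(key, new IntWritable(sum));  // write the final pair
        }
    }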

What do the master class and the output class do?

The master class is defined to update the master (the JobTracker), and the output class is defined to write data to the output location.

What is the input type/format in MapReduce by default?

By default, the input type in MapReduce is text.

Is it mandatory to set input and output type/format in MapReduce?

No, it is not mandatory to set the input and output type/format in MapReduce. By default, the framework treats both the input and the output type as text.

What does the text input format do?

In text input format, each line of the file becomes a record. The key is the byte offset at which the line starts and the value is the content of the whole line. This is how the data gets processed by a mapper: the mapper receives the key as a LongWritable parameter and the value as a Text parameter.

What does the JobConf class do?

MapReduce needs to logically separate the different jobs running on the same cluster. The JobConf class helps with job-level settings, such as declaring a job name in the running environment. It is recommended that the job name be descriptive and represent the type of job being executed.

What does conf.setMapperClass do?

conf.setMapperClass sets the mapper class for the job, along with everything related to the map phase, such as reading the data and generating intermediate key-value pairs out of the mapper.
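Putting these job-level settings together, here is a minimal driver sketch using the classic JobConf API (the paths come from the command line; the mapper and reducer classes are the illustrative ones sketched earlier):

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class WordCount {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(WordCount.class);
            conf.setJobName("word-count");               // a descriptive job name

            conf.setOutputKeyClass(Text.class);          // output key/value types
            conf.setOutputValueClass(IntWritable.class);

            conf.setMapperClass(WordCountMapper.class);  // register the map side
            conf.setReducerClass(WordCountReducer.class);

            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));

            JobClient.runJob(conf);                      // submit and wait for completion
        }
    }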

What do sorting and shuffling do?

Sorting and shuffling are responsible for creating a unique key and a list of values. Bringing all occurrences of the same key together in one location is known as sorting, and the process by which the intermediate output of the mappers is sorted and transferred across to the reducers is known as shuffling.

What does a split do?

Before data is transferred from its hard disk location to the map method, there is a phase called the split. The split pulls a block of data from HDFS into the framework; it does not write anything, but only reads data from the block and passes it to the mapper. By default, splitting is taken care of by the framework: the split size is equal to the block size, and splits are used to divide the input into units of map work.

How can we change the split size if our commodity hardware has less storage space?

If our commodity hardware has less storage space, we can change the split size by writing a custom splitter. Hadoop supports this kind of customization, which can be invoked from the main method of the job, as sketched below.
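A minimal sketch of the configuration route, assuming the classic API and that conf is the job's JobConf (the values are illustrative); a fully custom splitter would instead subclass an InputFormat and override getSplits:

    conf.setNumMapTasks(100);                       // hint: more map tasks means a smaller goal size per split
    conf.set("mapred.min.split.size", "16777216");  // lower bound on the split size, in bytes (16 MB)

In the newer org.apache.hadoop.mapreduce API, FileInputFormat.setMaxInputSplitSize(job, bytes) caps the split size directly.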

What does a MapReduce partitioner do?

A MapReduce partitioner makes sure that all the values of a single key go to the same reducer, thus allowing an even distribution of the map output over the reducers. It redirects the mapper output to the reducers by determining which reducer is responsible for a particular key.
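A sketch of a custom partitioner in the classic API (this mirrors default hash partitioning; the class name is illustrative):

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.Partitioner;

    // Sends every occurrence of a key to the same reducer.
    public class HashKeyPartitioner implements Partitioner<Text, IntWritable> {
        public void configure(JobConf job) { }  // no per-job setup needed

        public int getPartition(Text key, IntWritable value, int numReduceTasks) {
            // Mask off the sign bit so the partition index is never negative.
            return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
        }
    }

It is registered on the job with conf.setPartitionerClass(HashKeyPartitioner.class).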

How is Hadoop different from other data processing tools?

In Hadoop, based upon your requirements, you can increase or decrease the number of mappers without worrying about the volume of data to be processed. This is the beauty of parallel processing, in contrast to the other data processing tools available.

Can we rename the output file?

Yes, we can rename the output file by implementing the multiple format output class.
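One way to do this in the classic API is to subclass MultipleTextOutputFormat and override generateFileNameForKeyValue; a sketch (the chosen file name is illustrative):

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat;

    // Writes reducer output under a custom file name instead of part-00000.
    public class RenamedOutputFormat extends MultipleTextOutputFormat<Text, IntWritable> {
        @Override
        protected String generateFileNameForKeyValue(Text key, IntWritable value, String name) {
            return "wordcounts";  // 'name' carries the default part-xxxxx name
        }
    }

It is registered with conf.setOutputFormat(RenamedOutputFormat.class).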

Why can we not do aggregation (addition) in a mapper? Why do we require a reducer for that?

We cannot do aggregation (addition) in a mapper because sorting is not done in a mapper; sorting happens only on the reducer side. Mapper initialization depends on each input split: a new mapper instance gets initialized for each split, so while aggregating we would lose the values seen by other mapper instances. No single mapper sees all the rows for a key, and thus we have no track of values from the other splits.

What is Streaming?

Streaming is a feature of the Hadoop framework that allows us to write MapReduce programs in any programming language that can accept standard input and produce standard output. It could be Perl, Python or Ruby, and does not necessarily have to be Java. However, customization in MapReduce can only be done using Java and not any other programming language.

What is a Combiner?

A ‘Combiner’ is a mini-reducer that performs the local reduce task. It receives the input from the mapper on a particular node and sends its output on to the reducer. Combiners help enhance the efficiency of MapReduce by reducing the amount of data that must be sent to the reducers.
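In the word-count sketch above, the reduce operation (summing counts) is commutative and associative, so the same class can double as the combiner; assuming conf is the job's JobConf:

    // Run the reduce logic locally on each mapper's output, shrinking
    // the amount of data shuffled across the network to the reducers.
    conf.setCombinerClass(WordCountReducer.class);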

What is the difference between an HDFS Block and Input Split?

HDFS Block is the physical division of the data and Input Split is the logical division of the data.

What happens in a TextInputFormat?

In TextInputFormat, each line in the text file is a record. The key is the byte offset of the line and the value is the content of the line. For instance: key: LongWritable, value: Text.

What do you know about KeyValueTextInputFormat?

In KeyValueTextInputFormat, each line in the text file is a record. Each line is divided at its first separator character: everything before the separator is the key and everything after it is the value. For instance: key: Text, value: Text.

What do you know about SequenceFileInputFormat?

SequenceFileInputFormat is an input format for reading sequence files. The key and value types are user defined. It is a specific compressed binary file format optimized for passing data from the output of one MapReduce job to the input of another MapReduce job.
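A sketch of selecting among these input formats in the classic API, assuming conf is the job's JobConf (a real job would pick exactly one; the separator property key assumed here is the classic-API one):

    // Default: TextInputFormat - key is the byte offset, value is the line.
    conf.setInputFormat(org.apache.hadoop.mapred.TextInputFormat.class);

    // KeyValueTextInputFormat - each line is split at the first separator (tab by default).
    conf.setInputFormat(org.apache.hadoop.mapred.KeyValueTextInputFormat.class);
    conf.set("key.value.separator.in.input.line", ",");  // use a comma instead of the default tab

    // SequenceFileInputFormat - binary files written by a previous MapReduce job.
    conf.setInputFormat(org.apache.hadoop.mapred.SequenceFileInputFormat.class);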

What do you know about NLineInputFormat?

NLineInputFormat treats ‘n’ lines of input as one split, so each mapper receives exactly n lines.


What is a JobTracker in Hadoop? How many instances of JobTracker run on a Hadoop Cluster?
JobTracker is the daemon service for submitting and tracking MapReduce jobs in Hadoop. Only one JobTracker process runs on any Hadoop cluster, in its own JVM process; in a typical production cluster it runs on a separate machine. Each slave node is configured with the JobTracker node location. The JobTracker is a single point of failure for the Hadoop MapReduce service: if it goes down, all running jobs are halted. The JobTracker performs the following actions (from the Hadoop wiki):
Client applications submit jobs to the JobTracker.
The JobTracker talks to the NameNode to determine the location of the data.
The JobTracker locates TaskTracker nodes with available slots at or near the data.
The JobTracker submits the work to the chosen TaskTracker nodes.
The TaskTracker nodes are monitored. If they do not submit heartbeat signals often enough, they are deemed to have failed and the work is scheduled on a different TaskTracker.
A TaskTracker will notify the JobTracker when a task fails. The JobTracker decides what to do then: it may resubmit the job elsewhere, it may mark that specific record as something to avoid, and it may even blacklist the TaskTracker as unreliable.
When the work is completed, the JobTracker updates its status.
Client applications can poll the JobTracker for information.

How does the JobTracker schedule a task?
The TaskTrackers send out heartbeat messages to the JobTracker, usually every few minutes, to reassure the JobTracker that they are still alive. These messages also inform the JobTracker of the number of available slots, so the JobTracker can stay up to date with where in the cluster work can be delegated. When the JobTracker tries to find somewhere to schedule a task within the MapReduce operations, it first looks for an empty slot on the same server that hosts the DataNode containing the data, and failing that, it looks for an empty slot on a machine in the same rack.
What is a TaskTracker in Hadoop? How many instances of TaskTracker run on a Hadoop Cluster?
A TaskTracker is a slave node daemon in the cluster that accepts tasks (Map, Reduce and Shuffle operations) from a JobTracker. Only one TaskTracker process runs on any Hadoop slave node, in its own JVM process. Every TaskTracker is configured with a set of slots, which indicate the number of tasks it can accept. The TaskTracker starts a separate JVM process for each unit of actual work (called a Task Instance); this ensures that a process failure does not take down the TaskTracker itself. The TaskTracker monitors these task instances, capturing the output and exit codes. When a task instance finishes, successfully or not, the TaskTracker notifies the JobTracker. The TaskTrackers also send out heartbeat messages to the JobTracker, usually every few minutes, to reassure the JobTracker that they are still alive; these messages also inform the JobTracker of the number of available slots, so the JobTracker can stay up to date with where in the cluster work can be delegated.
What is a Task instance in Hadoop? Where does it run?
Task instances are the actual MapReduce tasks which run on each slave node. The TaskTracker starts a separate JVM process for each task instance to ensure that a process failure does not take down the TaskTracker. Each task instance runs in its own JVM process, and there can be multiple task instance processes running on a slave node, based on the number of slots configured on the TaskTracker. By default, a new task instance JVM process is spawned for each task.

How many Daemon processes run on a Hadoop system?
Hadoop comprises five separate daemons, each running in its own JVM. The following three daemons run on master nodes:
NameNode – stores and maintains the metadata for HDFS.
Secondary NameNode – performs housekeeping functions for the NameNode.
JobTracker – manages MapReduce jobs and distributes individual tasks to the machines running the TaskTracker.
The following two daemons run on each slave node:
DataNode – stores the actual HDFS data blocks.
TaskTracker – responsible for instantiating and monitoring individual Map and Reduce tasks.
What is the configuration of a typical slave node on a Hadoop cluster? How many JVMs run on a slave node?
A single instance of a TaskTracker runs on each slave node, as a separate JVM process.
A single instance of a DataNode daemon runs on each slave node, as a separate JVM process.
One or multiple task instances run on each slave node, each as a separate JVM process. The number of task instances can be controlled by configuration; typically a high-end machine is configured to run more task instances.
What is the difference between HDFS and NAS?
The Hadoop Distributed File System (HDFS) is a distributed file system designed to run on commodity hardware. It has many similarities with existing distributed file systems, but the differences from them are significant. The following are differences between HDFS and NAS:
In HDFS, data blocks are distributed across the local drives of all machines in a cluster, whereas in NAS data is stored on dedicated hardware.
HDFS is designed to work with the MapReduce system, since computation is moved to the data. NAS is not suitable for MapReduce, since data is stored separately from the computations.
HDFS runs on a cluster of machines and provides redundancy using a replication protocol, whereas NAS is provided by a single machine and therefore does not provide data redundancy.
How does the NameNode handle DataNode failures?
The NameNode periodically receives a Heartbeat and a Blockreport from each of the DataNodes in the cluster. Receipt of a Heartbeat implies that the DataNode is functioning properly, and a Blockreport contains a list of all blocks on that DataNode. When the NameNode notices that it has not received a heartbeat message from a DataNode after a certain amount of time, that DataNode is marked as dead. Since its blocks will then be under-replicated, the system begins replicating the blocks that were stored on the dead DataNode. The NameNode orchestrates the replication of data blocks from one DataNode to another; the replication data transfer happens directly between DataNodes, and the data never passes through the NameNode.
Does the MapReduce programming model provide a way for reducers to communicate with each other? In a MapReduce job, can a reducer communicate with another reducer?
No, the MapReduce programming model does not allow reducers to communicate with each other. Reducers run in isolation.
Can I set the number of reducers to zero?
Yes, setting the number of reducers to zero is a valid configuration in Hadoop. When you set the number of reducers to zero, no reducers are executed, and the output of each mapper is stored in a separate file on HDFS. (This is different from the case when the number of reducers is greater than zero, where the mappers' output, the intermediate data, is written to the local file system, not HDFS, of each mapper's slave node.)
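Assuming conf is the job's JobConf, a map-only job is just:

    conf.setNumReduceTasks(0);  // no shuffle, no reduce: mapper output lands directly on HDFS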
Where is the Mapper Output (intermediate key-value data) stored?
The mapper output (intermediate data) is stored on the local file system (not HDFS) of each individual mapper node, typically in a temporary directory location which can be set up in the configuration by the Hadoop administrator. The intermediate data is cleaned up after the Hadoop job completes.
What are combiners? When should I use a combiner in my MapReduce Job?
Combiners are used to increase the efficiency of a MapReduce program. They aggregate intermediate map output locally, on each individual mapper's output, and can help you reduce the amount of data that needs to be transferred across to the reducers. You can use your reducer code as a combiner if the operation performed is commutative and associative. However, the execution of the combiner is not guaranteed: Hadoop may or may not execute a combiner, and if required it may execute it more than once. Therefore your MapReduce jobs should not depend on the combiner's execution.
What are the Writable & WritableComparable interfaces?
org.apache.hadoop.io.Writable is a Java interface. Any key or value type in the Hadoop Map-Reduce framework implements this interface. Implementations typically implement a static read(DataInput) method which constructs a new instance, calls readFields(DataInput) and returns the instance.
org.apache.hadoop.io.WritableComparable is a Java interface. Any type which is to be used as a key in the Hadoop Map-Reduce framework should implement this interface. WritableComparable objects can be compared to each other using Comparators.
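A minimal sketch of a custom key type (the UserIdKey class is hypothetical): it must serialize itself via write/readFields and define a sort order via compareTo:

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;

    import org.apache.hadoop.io.WritableComparable;

    public class UserIdKey implements WritableComparable<UserIdKey> {
        private long userId;

        public UserIdKey() { }  // Hadoop requires a no-arg constructor
        public UserIdKey(long userId) { this.userId = userId; }

        public void write(DataOutput out) throws IOException {
            out.writeLong(userId);               // serialize the fields
        }

        public void readFields(DataInput in) throws IOException {
            userId = in.readLong();              // deserialize in the same order
        }

        public int compareTo(UserIdKey other) {  // defines the key sort order
            return userId < other.userId ? -1 : (userId == other.userId ? 0 : 1);
        }

        @Override
        public int hashCode() {                  // used by the default partitioner
            return (int) (userId ^ (userId >>> 32));
        }

        @Override
        public boolean equals(Object o) {
            return o instanceof UserIdKey && ((UserIdKey) o).userId == userId;
        }
    }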
What is the Hadoop MapReduce API contract for the key and value classes?
The Key must implement the org.apache.hadoop.io.WritableComparable interface.
The value must implement the org.apache.hadoop.io.Writable interface.
What are an IdentityMapper and IdentityReducer in MapReduce?
org.apache.hadoop.mapred.lib.IdentityMapper implements the identity function, mapping inputs directly to outputs. If the MapReduce programmer does not set the mapper class using JobConf.setMapperClass, then IdentityMapper.class is used as the default value.
org.apache.hadoop.mapred.lib.IdentityReducer performs no reduction, writing all input values directly to the output. If the MapReduce programmer does not set the reducer class using JobConf.setReducerClass, then IdentityReducer.class is used as the default value.

What is the meaning of speculative execution in Hadoop? Why is it important?
Speculative execution is a way of coping with individual machine performance. In large clusters, where hundreds or thousands of machines are involved, there may be machines which are not performing as fast as others. This may delay a full job due to only one machine not performing well. To avoid this, speculative execution in Hadoop can run multiple copies of the same map or reduce task on different slave nodes; the results from the first node to finish are used.
When are the reducers started in a MapReduce job?
In a MapReduce job, reducers do not start executing the reduce method until all map tasks have completed. Reducers start copying intermediate key-value pairs from the mappers as soon as they are available, but the programmer-defined reduce method is called only after all the mappers have finished.
If reducers do not start before all mappers finish, then why does the progress on a MapReduce job show something like Map(50%) Reduce(10%)? Why is reducer progress displayed when the mappers have not finished yet?
Reducers start copying intermediate key-value pairs from the mappers as soon as they are available. The progress calculation also takes into account the data transfer done by the reduce process, so reduce progress starts showing up as soon as any intermediate key-value pair from a mapper is available to be transferred to a reducer. Though the reducer progress is updated, the programmer-defined reduce method is still called only after all the mappers have finished.
What is HDFS? How is it different from traditional file systems?
HDFS, the Hadoop Distributed File System, is responsible for storing huge data on the cluster. This is a distributed file system designed to run on commodity hardware. It has many similarities with existing distributed file systems. However, the differences from other distributed file systems are significant.
HDFS is highly fault-tolerant and is designed to be deployed on low-cost hardware.
HDFS provides high throughput access to application data and is suitable for applications that have large data sets.
HDFS is designed to support very large files. Applications that are compatible with HDFS are those that deal with large data sets. These applications write their data only once but they read it one or more times and require these reads to be satisfied at streaming speeds. HDFS supports write-once-read-many semantics on files.
What is HDFS Block size? How is it different from traditional file system block size?
In HDFS, data is split into blocks and distributed across multiple nodes in the cluster. Each block is typically 64 MB or 128 MB in size, and each block is replicated multiple times; the default is to replicate each block three times, with replicas stored on different nodes. HDFS utilizes the local file system to store each HDFS block as a separate file. HDFS block size cannot be compared with the traditional file system block size: traditional blocks are a few kilobytes, while HDFS blocks are deliberately large to minimize seek overhead for streaming reads of large files.
What is a NameNode? How many instances of NameNode run on a Hadoop Cluster?
The NameNode is the centerpiece of an HDFS file system. It keeps the directory tree of all files in the file system and tracks where across the cluster the file data is kept, but it does not store the data of these files itself. Only one NameNode process runs on any Hadoop cluster, in its own JVM process; in a typical production cluster it runs on a separate machine. The NameNode is a single point of failure for the HDFS cluster: when the NameNode goes down, the file system goes offline. Client applications talk to the NameNode whenever they wish to locate a file, or when they want to add/copy/move/delete a file. The NameNode responds to successful requests by returning a list of relevant DataNode servers where the data lives.
What is a DataNode? How many instances of DataNode run on a Hadoop Cluster?
A DataNode stores data in the Hadoop file system, HDFS. Only one DataNode process runs on any Hadoop slave node, in its own JVM process. On startup, a DataNode connects to the NameNode. DataNode instances can talk to each other, mostly when replicating data.
How does the client communicate with HDFS?
Client communication with HDFS happens via the Hadoop HDFS API. Client applications talk to the NameNode whenever they wish to locate a file, or when they want to add/copy/move/delete a file on HDFS. The NameNode responds to successful requests by returning a list of relevant DataNode servers where the data lives. Client applications can then talk directly to a DataNode, once the NameNode has provided the location of the data.
How are HDFS blocks replicated?
HDFS is designed to reliably store very large files across machines in a large cluster. It stores each file as a sequence of blocks; all blocks in a file except the last block are the same size. The blocks of a file are replicated for fault tolerance. The block size and replication factor are configurable per file: an application can specify the number of replicas of a file, and the replication factor can be specified at file creation time and changed later. Files in HDFS are write-once and have strictly one writer at any time. The NameNode makes all decisions regarding the replication of blocks. HDFS uses a rack-aware replica placement policy: in the default configuration there are three copies of each data block in total, with two copies stored on DataNodes in one rack and the third copy on a DataNode in a different rack.
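Since the replication factor is configurable per file, it can also be changed after creation through the FileSystem API; a small sketch (the path is illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SetReplication {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            // Lower the replication factor of one file to 2; returns true on success.
            fs.setReplication(new Path("/data/events.log"), (short) 2);
        }
    }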



