Big Data Administrator

Job Title:

Big Data Administrator

Location:

Bowie, MD

Industry:

Information Technology

Duration:

6+ months

Job Description:

Looking for a Senior Big Data Administrator in Bowie, MD. In this role, the Big Data Administrator is responsible for providing 24/7 operations support for Inovalon's mission-critical Hadoop, MongoDB, and Greenplum databases.

 

Responsibilities: 

  • Provide 24x7 operations support for large-scale Hadoop and MongoDB clusters across production, UAT, and development environments;
  • Serve in an on-call rotation as an escalation contact for critical production issues;
  • Build and administer Big Data clusters with HDFS, Kafka, ZooKeeper, Hive, YARN, Hue, Oozie, etc.;
  • Support NoSQL databases such as HBase, Hive, and MongoDB;
  • Manage replication links between clusters to maintain high availability;
  • Configure and monitor MongoDB instances and replica sets;
  • Analyze and debug slow-running development, performance, and production jobs;
  • Ensure all databases are backed up to meet the business’s Recovery Point Objectives (RPO);
  • Monitor the Ambari and Ops Manager dashboards, and troubleshoot and resolve Hadoop and MongoDB issues;
  • Document database environments and standard operating procedures;
  • Ensure that all big data components have the latest patches and correct versions of supporting tools;
  • Work with vendors and user communities to research and test new technologies that improve the technical capabilities of existing Hadoop clusters;
  • Execute capacity planning and monitor database growth;
  • Assist development teams with big data-related topics; and
  • Build domain expertise and cross-train team members.

 

Job Requirements:

Qualifications & Technical Skills:

  • 8-10 years of IT experience supporting any relational database, preferably MS SQL;
  • 3-5 years’ experience installing and configuring Hortonworks Hadoop clusters, including a combination of the following: backup and recovery of HDFS file systems (a Java-based distributed file system); MySQL databases used by the cluster; and configuration and maintenance of HDFS, YARN Resource Manager, MapReduce, Hive, HBase, Kafka, or Spark;
  • Strong understanding of and experience with ODBC/JDBC, including various clients such as Tableau and MicroStrategy, and server components; and
  • Monitoring and tuning cluster component performance.

 

Tip of the Week


Make sure your LinkedIn profile is consistent with your resume. Consistency is key!

 

View Starpoint's Top Tips.

Send Us Your Resume


Let Starpoint's expert recruiters help you land your next job.

 

Submit Your Resume

@Starpoint_Jobs