Infrastructure Engineers set up and manage our end-to-end cloud analytics platform, which is designed to manage and automate data ingestion, compute resources, data set creation, advanced data science, and accelerated data interaction, dramatically increasing speed to insight.
Essential Duties & Responsibilities:
- Apply basic knowledge of infrastructure, networking, operating systems, distributed systems, security, and data movement technologies (such as NFS, rsync, and SFTP/SCP) to address specific operational components or deployments within a given platform.
- Investigate, design, and build platform components around a new idea, use case, or active need.
Basic Knowledge Requirements:
- Virtualization methodology and strategy – Why and when to virtualize. VMware experience is a plus.
- Containerization methodology and strategy – Basic understanding of containerization concepts and the ability to discuss them. Familiarity with containerization tools and platforms, and/or Docker experience, is a plus.
- Enterprise networking concepts – Intermediate understanding of spine-leaf networking, VLANs, and next-generation firewalling.
- Enterprise infrastructure concepts – Intermediate understanding of server selection criteria and hardware troubleshooting. Experience deploying hardware in a data center environment is a plus.
- Enterprise security concepts – Intermediate knowledge of defensive tools, common attack vectors, and risk mitigation strategies. A personal interest in security, even as a hobbyist, is a plus.
- Intermediate knowledge of the CentOS or RHEL Linux distributions.
- Intermediate knowledge of Hadoop and its standard packages. Hortonworks experience is a plus.
- Intermediate knowledge of NFS management and deployment.
- Intermediate knowledge of Identity Management concepts. RHEL IPA experience is a plus.
- Expert knowledge of data movement techniques. Experience in message-based data exchange is a plus.
- Basic programming skills (variables, objects, functions, loops, package usage).
- Basic scripting capabilities (ability to read and modify Python and Node.js is required; ability to develop in Python and Node.js is a plus).
- Proficient Linux shell manipulation (filesystem navigation, shell scripting, OS error troubleshooting).
- Ability to think and act methodically, and with structure, while troubleshooting.
- Ability to adapt, self-educate, and overcome challenges outside their area of expertise.
- Ability and motivation to learn and grow technically and professionally daily.
- Ability to consistently test actions and plans against intentions and optimization strategies.
- Ability to pre-identify failure scenarios in existing deployments as well as new designs.
- Ability to communicate in a clear, calm, and timely manner with coworkers and clients.
- Demonstrates strong customer service skills, understanding that every person (internal and external) is a client.
- Ability to share new ideas and challenge the status quo thoughtfully and with respect.
- Basic error and issue troubleshooting within Hadoop (log review, dfsadmin, etc.).
- Basic enterprise workspace management (Wi-Fi, antivirus, network printing, conferencing setup). Meraki, Duo, or Portnox experience is a plus.
Location: South Bend, IN