Overview

Join SADA as a Sr. Data Engineer!

Your Mission

As a Sr. Data Engineer at SADA, you will work collaboratively with architects and other engineers to recommend, prototype, build, and debug data infrastructure on Google Cloud Platform (GCP). You will have the opportunity to work on real-world data problems facing our customers today. Engagements vary from purely consultative to heavily hands-on and cover a diverse array of domain areas, such as data migrations, data archival and disaster recovery, and big data analytics solutions requiring batch or streaming data pipelines, data lakes, and data warehouses.

You will be expected to run point on entire projects, end to end, and to mentor less experienced Data Engineers. You will be recognized as an expert within the team and will build a reputation with Google and our customers. You will repeatedly deliver project architectures and critical components that other engineers defer to you on for lack of expertise. You will also participate in early-stage opportunity qualification calls, as well as lead client-facing technical discussions for established projects.

Pathway to Success

#BeOneStepAhead: At SADA we are in the business of change. We are focused on leading-edge technology that is ever-evolving. We embrace change enthusiastically and encourage agility. Our engineers not only know that change is inevitable; they embrace it, continuously expanding their skills to prepare for future customer needs.

Your success starts with positively impacting the direction of a fast-growing practice with vision and passion. You will be measured quarterly on the breadth, magnitude, and quality of your contributions; your ability to estimate accurately; customer feedback at the close of projects; how well you collaborate with your peers; and the consultative polish you bring to customer interactions.

As you continue to execute successfully, we will build a customized development plan together that leads you through the engineering or management growth tracks.

Expectations

Required Travel – 10% travel to customer sites, conferences, and other related events. Due to the COVID-19 pandemic, travel has been temporarily restricted.
Customer Facing – You will interact with customers on a regular basis, sometimes daily, other times weekly/bi-weekly. Common touchpoints occur when qualifying potential opportunities, at project kickoff, throughout the engagement as progress is communicated, and at project close. You can expect to interact with a range of customer stakeholders, including engineers, technical project managers, and executives.
Training – Ongoing, with a first-week orientation at HQ followed by a 90-day onboarding schedule. Details of the timeline can be shared on request.

Job Requirements

Required Credentials:
– Google Professional Data Engineer certification, or the ability to earn it within the first 45 days of employment

Required Qualifications:
– Mastery of at least one of the following domain areas:
1. Data warehouse modernization: building complete data warehouse solutions, including technical architectures, star/snowflake schema designs, infrastructure components, ETL/ELT pipelines, and reporting/analytics tools. Must have hands-on experience with batch or streaming data processing software (such as Apache Beam, Airflow, Hadoop, Spark, or Hive).
2. Data migration: migrating data stores to reliable and scalable cloud-based stores, including strategies for near-zero downtime.
3. Backup, restore & disaster recovery: building production-grade data backup, restore, and disaster recovery solutions at up to petabyte scale.
– Experience writing software in one or more languages such as Python, Java, Scala, or Go
– Experience building production-grade data solutions (relational and NoSQL)
– Experience with systems monitoring/alerting, capacity planning, and performance tuning
– Experience in a technical consulting or customer-facing role

Useful Qualifications:
– Experience working with Google Cloud data products (Cloud SQL, Spanner, Cloud Storage, Pub/Sub, Dataflow, Dataproc, Bigtable, BigQuery, Dataprep, Composer, etc.)
– Experience with IoT architectures and building real-time data streaming pipelines
– Experience operationalizing machine learning models on large datasets
– Demonstrated leadership and self-direction — a willingness to teach others and learn new techniques
– Demonstrated skill in selecting the right statistical tools for a given data analysis problem