Overview

Who you are:

We are always looking for amazing talent who can contribute to our growth and deliver results! Geotab is seeking a Senior Data Platform Developer who will devise new methodologies and infrastructure for reconstructing and enriching data at scale, while supporting internal users who leverage the big data environment to upload additional data sets. If you love technology, and are keen to join an industry leader — we would love to hear from you!

What you’ll do:

As a Senior Data Platform Developer, your key responsibility will be participating in the design, implementation, and maintenance of our new video ETL infrastructure on Google Cloud Platform, ensuring data is continuously and promptly available for analysis by the data science team. You will work closely with team leads and other developers to design and implement Geotab’s Big Data infrastructure in the cloud. To be successful in this role, you will be a self-starter with strong written and verbal communication skills and the ability to quickly understand complex technical concepts.

How you’ll make an impact:

  • Design, implement, and maintain new data infrastructure platforms managing the data ingestion, digestion, and stream processing of Geotab’s internal data lake.
  • Design, implement, and maintain logging, monitoring, and alerting services to ensure the health of Geotab’s big data infrastructure.
  • Perform quality assurance by conducting peer reviews, ensuring coding standards are followed, and identifying performance improvements and bug fixes.
  • Analyze call stacks, trace files, and performance data to troubleshoot and identify the root causes of bugs.
  • Assist with the development and maintenance of data engineering guidelines, policies, standards and processes.
  • Mentor other team members toward technical expertise.
  • Participate in a 24×7 on-call rotating schedule (if applicable).

What you’ll bring to this role:

  • Post-secondary degree in Computer Science, Software or Computer Engineering, or a related field.
  • 5+ years of experience in Data Engineering or a similar role.
  • 5+ years of experience developing production-level systems using Java and Spring frameworks.
  • 3+ years of experience designing, building, and maintaining production-level containerized applications using tools such as Docker, Kubernetes, or OpenShift.
  • Knowledge of Apache Kafka, Apache Flink, Apache Ignite, Apache Airflow, Apache Superset, Apache Olingo, and DataHub is a big plus.
  • Knowledge of gRPC, Protobuf, Apache Avro, Apache Beam is a plus.
  • Knowledge of data management fundamentals and data storage principles.
  • Knowledge of batch and streaming data architectures.
  • Experience with API design and implementation.
  • Familiarity with Big Data environments (e.g. Google BigQuery).