Data engineer with a touch of DevOps

Published on: January 26, 2021

About Kapernikov

Kapernikov is a consultancy company specialized in data for industry. We apply our data skills, ranging from MDM and ETL to data science and computer vision, to industrial and asset management projects.

Kapernikov has a unique approach: consultants are trained to play multiple roles at our clients (developer, analyst, scientist, engineer, project manager) so as to deliver fast and stay agile. Communication lines are short, and support from colleagues is always near. Work-life balance is not a mere buzzword but a way of life at Kapernikov. The company is managed as a sociocracy where everyone has a say. Lastly, we promote green transportation as much as possible and try to avoid cars.

Context

At Kapernikov, we help our customers manage their data better. By our customers, we mean companies that build and manage large infrastructure, as well as complex manufacturing companies.

When it comes to managing databases, we are a tower of strength. We deploy data cleaning campaigns in order to bring the corporate databases up to date again. We establish data governance to define the golden master and provide guidelines and rules. We help our clients build new applications by providing them with high-quality data. We organize master data management, and we manage the ETL process like a boss. We use top-notch visualization and machine learning techniques, but we’re not averse to low-tech post-it meetings either. In short, data management at Kapernikov is fun, but also serious business.

Have we whetted your appetite with the above description? Are you attracted to beautiful models? Do you find data sexy (just like we do)? Then, without a doubt, you are the data engineer we are looking for.

For future projects we are looking for someone to extract, transform and load master and transactional data from various sources into a target system. Where data doesn’t conform to what the target system expects, we implement cleansing rules and procedures to resolve these issues. We work mainly with open-source frameworks for the transformation, but can use commercial software where and when needed.
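To make the extract-transform-load flow described above concrete, here is a minimal sketch in pandas. The column names, the cleansing rule and the data are all hypothetical examples, not a description of any actual customer system.

```python
# A minimal ETL sketch with pandas: extract, cleanse, merge, aggregate, load.
# All column names and the cleansing rule are invented for illustration.
import pandas as pd

# Extract: two source extracts, one with master data, one with transactional data.
assets = pd.DataFrame({
    "asset_id": [1, 2, 3],
    "name": ["  Pump A", "Valve B ", None],
})
readings = pd.DataFrame({
    "asset_id": [1, 1, 2],
    "value": [10.5, 11.0, 7.2],
})

# Transform: apply cleansing rules so the data conforms to the target schema.
assets["name"] = assets["name"].str.strip()
assets = assets.dropna(subset=["name"])  # reject records missing a name

# Merge the two sources and aggregate readings per asset.
target = (
    assets.merge(readings, on="asset_id", how="left")
          .groupby(["asset_id", "name"], as_index=False)["value"]
          .mean()
)

# Load: here we just print; in practice this would be written to the target system.
print(target)
```

In a real project the extract and load steps would talk to databases or files, and the cleansing rules would come from the data governance guidelines agreed with the customer.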

We use DevOps to organize our development cycles. We want you not only to adopt this methodology, but to deepen your knowledge and take the initiative to organize us even better.

Your responsibilities:

  • You extract, convert and feed data into a target system, merging multiple sources where needed. You are not discouraged when things get difficult; instead, you take the initiative in an inventive and systematic way.
  • You do data engineering in a modern way that raises the bar for us and our customers. This means investigating and becoming proficient in newer concepts like functional data engineering, (open-source) big data technology and streaming ETL.
  • You gather and analyze our customers’ requests and the quality of their data, and you provide feedback about the feasibility of their requirements.
  • You visualize data, so our customers obtain insight into their data quality.
  • You provide guidance to the customer on how their data could be improved or made more relevant. You can automate some of the data cleansing yourself.
  • You manage your project in an agile way, close to the customer, so you get to results quickly.
  • You are not afraid to troubleshoot various issues by yourself, and you feel at ease while working in a shell environment.
  • You assist our team of data managers and data cleansers with your knowledge and scripting skills.

This is the person we are looking for:

  • You have a master’s degree in informatics/engineering (or equivalent experience), or a bachelor’s degree with initial work experience. Even better: you can convince us with your code on GitHub.
  • You know your database stuff. SQL has no secrets for you. Pandas, Postgres and Oracle ring more than a bell. And you can surprise us with a lot more of that database-gobbledygook.
  • You are a data geek. For you, databases are more than just records and tables. You understand that you are working with valuable information that people rely on every day.
  • You know how to extract and transform data from different sources in order to combine it and load it into another system. Workflow/dataflow engines hold no secrets for you. You swear by the advantages of code and versioning.
  • You have worked in a team that uses DevOps and are interested in this methodology.
  • You strive to understand the business and how it operates, so that you can help them solve their data problems.
  • You are communicative, punctual and organized.
  • You know how to work with engineers, business users and managers. You know how to explain the ins and outs of their data to them, and you understand what they want to see.
  • You speak French or Dutch fluently, and you can express yourself in the other national language.
  • You are prepared to work your way into our customers’ plans and database systems. A first experience with a large infrastructure operator is a plus.
  • You can fully engage yourself in a project. Change or unexpected tasks are a challenge for you.
  • You will commit yourself to Kapernikov on a full-time basis.
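The SQL and pandas skills mentioned above can be illustrated with a small sketch: SQL does the filtering in the database, and pandas picks up the result for further analysis. The table, columns and data are made up for the example; we use SQLite here purely so the snippet is self-contained, while a real project would more likely target Postgres or Oracle.

```python
# A hedged illustration of mixing SQL and pandas.
# The "assets" table and its contents are invented for this example.
import sqlite3

import pandas as pd

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE assets (asset_id INTEGER, status TEXT)")
conn.executemany(
    "INSERT INTO assets VALUES (?, ?)",
    [(1, "in_service"), (2, "retired"), (3, "in_service")],
)
conn.commit()

# SQL filters in the database; pandas takes over for analysis and visualization.
df = pd.read_sql("SELECT asset_id FROM assets WHERE status = 'in_service'", conn)
print(df["asset_id"].tolist())  # [1, 3]
```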

What you can expect from us

  • A job in a pleasant working environment. The Kapernikov HQ is close to Brussels Midi station and offers a wonderful, informal working environment. However, for this type of assignment, we often work at the customer’s site, which could be anywhere in Belgium (albeit most often around Brussels).
  • A competitive salary and many fringe benefits.
  • Education and training to perform your job well.
  • Room for initiative and the necessary feedback that will make you a better consultant.
  • A vibrant atmosphere with room for new ideas, experimentation and cross-fertilization with other Kapernikov consultants.
  • Kapernikov is a self-organized company with sociocracy as its philosophy, meaning decisions are taken and implemented by everyone as equals. Our employees come first.

Getting excited already? 
Let’s get in touch.