Data Engineer

We are looking for a talented data engineer to join our team of experts.

What is this about?

Synteda AB assists companies in embracing the era of artificial intelligence and Computer Vision, helping them take advantage of its vast possibilities by delivering innovative solutions. We understand how Computer Vision and AI can benefit society, and we stand ready to use our expertise to support businesses on their journey into the era of AI. We work with clients across a range of industries, in close collaboration with top universities in Sweden.

Synteda is actively working on several unique products and solutions in various fields. Dream–innovate–create is our work process, in which our ideas pass through different stages. We create successful solutions by first checking the feasibility of a product idea with a proof of concept (POC), then developing the solution using agile methodologies until we reach a complete, full-scale product.

Synteda has a core team of highly qualified experts who provide assessments to identify challenges, explore ideas to enhance effectiveness, and develop tailored algorithms to overcome company-specific challenges. Synteda is built on ethical awareness, curiosity, transparency, respect, participation and, last but not least, science.

What are we looking for?

Responsibilities
• Work on our various platforms and customer assignments.
• Develop, code and implement data solutions according to requirements.
• Maintain data sources, pipelines and models.
• Develop Data Quality checks for source and target data sets.
• Development and maintenance of ETL/ELT processes.
• Enhance existing applications and platforms.
• Create project specifications and requirements.
• Develop and maintain reports.
• Adopt best practices and share them across the team.

Skills
• Good knowledge and experience of Python.
• Good knowledge of ETL, PowerApps, Databricks, Docker, Delta Lake and Azure Data Factory.
• Experience with data visualization and analytics tools such as Power BI, Tableau, Microsoft Access, SPSS Modeler, SAS or similar.
• Strong experience with SQL databases or similar.
• Expertise in Hadoop, PySpark, Hive or similar.
• Good knowledge of cloud platforms such as Google Cloud, AWS and Azure, and of Azure DevOps for deployment and management of developed models.
• Experience in Data modelling.
• Experience in Data warehousing.
• Experience creating ETL/ELT processes.
• Experience creating reports.
• Experience of agile development methodologies.
• Fluency in English, both spoken and written, is required.

Please include links to your CV and/or portfolio, and write a little about yourself and why you are interested in this position.