Team International is a global tech consulting company and software development service provider. With more than 20 years of experience, operations in Poland, the United States, Portugal, and Latin America, and over 1,500 employees, we combine technology expertise, an agile approach to addressing business challenges, and business insight, all while maintaining a customer-centric perspective.

We are seeking an experienced mid-to-senior Data Engineer. We expect this person to slot into a functioning team and to bring strong work capacity and a sense of ownership to their work items. We are not necessarily looking for a strong personality with fresh ideas for building new things; that said, if we find someone like that, it is always a plus.

Responsibilities:
· Leverage Python skills for Data Engineering purposes.
· Build and maintain ETL/ELT pipelines using tools such as Azure Data Factory.
· Work closely with technical and non-technical client stakeholders to build and maintain pipelines that extract data from various source systems and organize raw data within the existing Data Warehouse structures.
· Build and maintain data pipelines to engineer features for Arrive's ML systems.
· Collaborate with other members of the Data Engineering team to review code and provide feedback on new data-related systems and design patterns.
· Contribute to technical and functional documentation for Data Engineering-owned systems and standards.
· Build data quality frameworks to enhance the observability and reliability of Data Engineering-owned systems and pipelines.
· Collaborate with analysts and business stakeholders to improve incoming data streams (source data quality) as well as outgoing data packages fed into ML tools, BI tools, etc., and own the end-to-end value proposition of the work items.
· Configure secure integrations between Engineering and Analytics systems that need to interact with the Data Platform.
· Participate proactively in an on-call rotation to respond to and debug data pipeline issues.
· Write complex SQL queries across different sources and apply SQL best practices when designing operational processes.
· Leverage strong Snowflake, DBT, and Airflow experience.

Requirements:
· Proven experience as a Data Engineer (mid-to-senior level).
· Strong proficiency in Python for Data Engineering purposes.
· Solid experience with ETL/ELT pipelines, preferably using Azure Data Factory.
· Strong SQL skills, including the ability to write complex queries across different sources and apply best practices for operational efficiency.
· Hands-on experience with Snowflake, DBT, and Airflow.
· Experience collaborating with technical and non-technical stakeholders.
· Knowledge of data quality frameworks to ensure the observability and reliability of pipelines.
· Ability to document technical and functional aspects of data systems and standards.

Nice to Have:
· Experience working with ML systems or preparing data features for machine learning pipelines.
· Familiarity with Data Warehouse design and best practices.
· Understanding of BI tools and their data requirements.
· Experience improving source data quality and output data packages for analytics and AI/ML tools.
· Ability to contribute to and improve design patterns for data-related systems.

What We Offer:
· Flexible engagement models.
· Work alongside top global IT talent.
· Full compliance with security and regulatory standards.
· Customized IT and software development solutions.
· A supportive and collaborative work environment.