Description
Join the (R)evolution
We’re not a logistics company with technology working for the fashion market. We’re a Tech Fashion Startup shaping the Supply Chain of the future. Our four founders and an early-adopter brand started down this path in 2015, and we have rapidly grown into a global business, delivering over 1M pieces of fashion to 123 countries worldwide. To keep growing, we need the right people to engage with the apparel industry and to build and develop great software. Spoke is our tech product: it brings full visibility, flexibility and control to the fashion universe like never before, enabling hundreds of brands to scale their business globally.
What’s the job HUUBout?
We are not just developing software; we are embedding artificial intelligence processes into our software development cycles. In this position you will work on the core of our business, entering and developing a tech ecosystem whose architecture aggregates an end-to-end, omnichannel solution while remaining abstract enough to integrate with other global parties, providing the foundation for all business fundamentals.
What you’ll do:
You will be integrated into the Data Engineering team, responsible for helping maintain and improve our data architecture and tools.
You will:
- Design and build scalable & reliable data pipelines (ETLs) for our data platform
- Constantly evolve data models & schema design of our Data Warehouse to support standard and self-service business needs
- Work cross-functionally with various teams, creating solutions that deal with large volumes of data
- Work with the team to set and maintain standards and development practices
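To give a flavour of the kind of pipeline work involved, here is a minimal ETL sketch in Python. It is purely illustrative, not part of our stack: the table, field names and sample records are all made up, and SQLite stands in for a real Data Warehouse.

```python
# Minimal ETL sketch: extract order records, normalize them, and load
# them into a warehouse-style table. All names here are illustrative.
import sqlite3

def extract():
    # In practice this step would call a REST API or read a source DB.
    return [
        {"order_id": 1, "country": "pt", "amount": "19.90"},
        {"order_id": 2, "country": "FR", "amount": "45.00"},
    ]

def transform(rows):
    # Normalize country codes and cast amounts to numbers.
    return [
        {"order_id": r["order_id"],
         "country": r["country"].upper(),
         "amount": float(r["amount"])}
        for r in rows
    ]

def load(rows, conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS fact_orders "
        "(order_id INTEGER PRIMARY KEY, country TEXT, amount REAL)")
    conn.executemany(
        "INSERT OR REPLACE INTO fact_orders VALUES "
        "(:order_id, :country, :amount)", rows)
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    load(transform(extract()), conn)
    print(conn.execute("SELECT SUM(amount) FROM fact_orders").fetchone()[0])
```

Real pipelines add the concerns the bullets above hint at: scheduling, incremental loads, schema evolution and monitoring.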
Who you are:
- You have 2+ years of experience in:
- Building and maintaining data pipelines in a custom or commercial ETL tool (Talend, Pentaho Kettle, SSIS, etc.)
- Relational SQL (T-SQL, PostgreSQL, MySQL, etc.) and NoSQL (MongoDB, CouchDB) databases
- A Data Warehouse environment with varied forms of source data structures (RDS, NoSQL, REST API, etc.)
- Good experience in creating and evolving dimensional data models & schema designs to improve the accessibility of data and provide intuitive analytics
- Skilled in Python
- Experience in working with a BI reporting tool (PowerBI, Tableau, etc)
- Fluent in English, both written and spoken
- You have good analytical and problem-solving skills, can work in a fast-moving operational environment, and bring enthusiasm and a positive attitude
- Experience/Certification in Google Cloud Platform (BigQuery, Dataflow, etc.) is a big plus
- Being very familiar with Data Streaming (Kafka) is a plus
- You are familiar with continuous delivery principles such as version control with Git; experience with unit and/or automated tests is a plus
Welcome to our world:
- Fast-growing global company
- Young, ambitious and innovative environment empowering personal and professional development
- Project-based mindset with a multidisciplinary approach towards success
- Agreements on flexible scheduling
- Autonomy and responsibility for your work
- Strong team culture and spirit
- Multiple opportunities with great career prospects
- We are committed to equality of opportunity regardless of age, gender, sexual orientation, race, religion, belief or any other personal, social or cultural background.
If you believe this is the right opportunity for you, please submit your application here.