Software Engineer - Data Processing
SpazioDati applies Semantic Text Analysis and Big Data techniques to massive amounts of corporate data to provide services - both B2B and B2C - for Sales Intelligence, Lead Generation, Data Cleansing, and more. Our services are currently used by many small and large companies in Italy and abroad, and we plan to expand them to new markets soon.
Our Data Management Team
Our Data Management Team is responsible for collecting, cleaning, and combining different data sources into large databases, and for providing quick and efficient access to the data through REST APIs. This involves every step from the bottom up: data collection, matching, disambiguation, and cleansing, all the way to data retrieval.
Your role as Software Engineer
We are looking for software developers who are ready to "get their hands dirty" to build a better world (of data, at least) - one where information is clean and easily accessible. Achieving such goals is not easy: you need to be rigorous but pragmatic, open-minded but experienced with the problems that commonly arise when working with data. We are a multicultural team that loves collaboration; we trust and respect each other, we are "agile enough" and, hard to believe, we all love data. If you are an independent data hacker who wants to work in a young and positive environment, we want to hear from you!
- Build and manage processes to collect, clean and store different data sources;
- work with Big Data technologies to join data sources and produce structured, cleaned information;
- handle live-updates, generate monitoring events and notifications about data updates;
- maintain APIs for accessing the data, and support other SpazioDati teams to get the most value out of it.
- You have a B.S. or M.S. in Computer Science or a related field, or equivalent experience;
- you are a quick thinker and a problem solver who feels comfortable writing code;
- you are self-organized and willing to work in a fast-paced environment;
- you take code quality seriously: continuous integration, testing, code review, linters;
- you have previous experience maintaining and developing data-intensive, mission-critical workflows;
- you have good experience with data stores, such as relational databases (PostgreSQL) and search engines (Elasticsearch);
- you feel at home with Python and JVM-based languages (Java, Scala, Groovy), both for scripting and for complex, object-oriented systems;
- you have experience with Big Data technologies (Apache Spark, Apache Kafka);
- you are good at understanding client needs and explaining how things work.
How to apply
Send your application to <firstname.lastname@example.org> and please make sure to include:
- Your CV (any format is fine)
- A short note on why you'd like to take this position, what you expect, and what you think you could bring to the team
- Please, please, please, include a GitHub account, some code you've written, an open source project you contributed to, or at least a link to some work of yours that you like (it doesn't matter if it's completely unrelated, like a videogame, an art project, or anything else). Precedence will be given to applications that meet this criterion.