Our team members have spent multiple years building solutions for leading ad tech companies. We have gained the knowledge to build best-in-class technology and become a global ad tech organization with the most automated, scalable and easy-to-use solutions for all our partners.
We operate hundreds of servers around the world to achieve the highest availability and lowest response times for our clients. Depending on the purpose, we locate them in the cloud (e.g. for tracking and ad delivery) or on bare-metal servers in data centers (e.g. for computationally expensive machine learning tasks). At peak times we scale our infrastructure dynamically, letting more server instances spin up as needed.
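The scaling decision itself can be reduced to simple arithmetic over observed load. The sketch below is a hypothetical illustration of such peak-time autoscaling logic; the function name, throughput figure, and replica bounds are illustrative assumptions, not our actual configuration.

```python
import math

def desired_replicas(current_rps, rps_per_instance, min_replicas=2, max_replicas=50):
    """Return how many service instances should run for the observed load.

    Hypothetical autoscaling sketch: one instance is assumed to handle
    `rps_per_instance` requests per second, and the result is clamped
    to a configured minimum and maximum.
    """
    needed = math.ceil(current_rps / rps_per_instance)
    return max(min_replicas, min(max_replicas, needed))
```

In practice a controller would feed this kind of target into the orchestrator (e.g. scaling a Docker Swarm service), but the core sizing logic stays this simple.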
Advertising needs highly scalable and globally distributed infrastructure and services. Based on our experience, we make heavy use of stateless services that run in a highly scalable container environment based on Docker Swarm. Our Infrastructure and DevOps teams are specialists in their field and work continuously to keep the systems up and running.
As a data-driven company, we decided early on to consolidate all data in a central place: our data platform. Our data pipelines process terabytes of data every day. The platform provides consistent views of the data tailored to specific purposes. While data for authentication and authorization might reside in a standard relational database, other systems access subsets of the data in Aerospike or Redis for low-latency access, or Druid for pre-aggregated statistics.
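A common way to combine a relational source of truth with a low-latency store like Redis or Aerospike is the cache-aside pattern. The sketch below is a minimal, hypothetical illustration: the plain dict stands in for a Redis client, and `load_from_db` stands in for a database query; none of these names come from our actual codebase.

```python
def get_user_profile(user_id, cache, load_from_db):
    """Serve reads from the low-latency store, falling back to the database.

    Cache-aside sketch: `cache` is a dict standing in for a key-value
    store such as Redis; `load_from_db` is a callable standing in for
    a relational-database lookup.
    """
    profile = cache.get(user_id)
    if profile is None:
        profile = load_from_db(user_id)  # slow path: relational database
        cache[user_id] = profile         # populate the cache for next time
    return profile
```

The first read pays the database round trip; subsequent reads for the same key are served entirely from the fast store.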
From the beginning, we made sure not to create a system that was monolithic or isolated. We chose to build dedicated services with clear data ownership, pass data along via message queues, and use APIs to connect our systems internally and to build front-ends on top. To the outside world, we connect through a variety of APIs, e.g. to automate offer import or cost import, and to connect to invoicing and bookkeeping tools as well as third-party data providers.
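Passing data between services through a queue decouples the producer from the consumer. The following sketch shows the idea with Python's in-process `queue.Queue` standing in for a real message broker; the event type and payload fields are made-up examples, not our actual message schema.

```python
import json
import queue

def publish(q, event_type, payload):
    """Producer side: serialize the event and hand it to the queue."""
    q.put(json.dumps({"type": event_type, "payload": payload}))

def consume(q):
    """Consumer side: pull and deserialize the next event."""
    return json.loads(q.get())
```

Because producer and consumer only agree on the message format, either side can be replaced, scaled, or taken down for deployment without the other noticing.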
At WeQ we have multiple small development teams working in an agile environment. We live this by delivering meaningful increments with every sprint and validating the value of each iteration with the stakeholders. We use Scrum or Kanban and continuously improve our products, processes, and tools.
Our first systems started with simple statistics based on historical data to help with optimization. Today we have a team of ML engineers and ML scientists working together on dedicated systems. We prototype algorithms using frameworks and modules such as scikit-learn, statsmodels or TensorFlow and evaluate them offline before we test promising candidates on our production systems. To speed things up, the final algorithms may use C or Cython in addition to Python where needed. Our ML team drives various projects, from conversion and LTV optimization to identifying fraudulent behaviour in real-time.
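One of the simplest building blocks behind real-time anomaly flagging is comparing a current observation against historical statistics. The sketch below is a deliberately minimal, hypothetical example using a z-score over click rates; the feature, threshold, and function name are illustrative and not a description of our production fraud models.

```python
import statistics

def is_suspicious(clicks_per_minute, history, z_threshold=3.0):
    """Flag a traffic source whose current click rate is a statistical outlier.

    Hypothetical sketch: compares the current rate against the mean and
    standard deviation of the source's historical rates and flags values
    more than `z_threshold` standard deviations above the mean.
    """
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return clicks_per_minute != mean
    z = (clicks_per_minute - mean) / stdev
    return z > z_threshold
```

Real systems layer many such signals and learned models on top, but fast statistical checks like this one are cheap enough to run on every request.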