The cost of software faults has grown from 59 billion USD in 2002 to 1.7 trillion USD in 2017. To contain this cost, the consensus among software engineers is to test as early and as often as possible. Many software development teams, however, do not follow this practice: testing is typically allotted far fewer resources than product development. New techniques and methods are therefore needed to improve testing quality in practice. Today, most software companies rely on simple coverage metrics to assess the quality of their tests, whereas the academic literature proposes mutation testing to assess and improve test quality. Despite its promising results, mutation testing is not yet widely adopted in industry. We attribute this to three main problems: the performance overhead, tool providers' lack of domain knowledge, and the lack of tool support. In this thesis, we address these three problems. Our results show that it is feasible to adapt the mutation testing process to industrial needs.