This focus paper deals with discrimination, a fundamental rights area particularly affected by technological developments. When algorithms are used for decision making, they can discriminate against individuals. The principle of non-discrimination, enshrined in Article 21 of the Charter of Fundamental Rights of the European Union, must be taken into account whenever algorithms are applied to everyday life. The paper explains how such discrimination can occur and suggests possible solutions. It examines big data and its fundamental rights implications, data-supported decision making in the form of predictions, algorithms and machine learning, and how computers "learn to discriminate". It also outlines ways to detect and avoid discrimination, including auditing and repairing algorithms, the (un)availability of information on protected characteristics, and the role of feedback loops in more efficient decision making.
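To make the idea of computers "learning to discriminate" concrete, the following is a minimal, purely illustrative sketch using synthetic data (not data from the paper). It assumes a hypothetical hiring scenario in which historical decisions were biased against one group, and in which a non-protected attribute (here, a made-up postcode) correlates with group membership. A model fitted to such data reproduces the bias even though the protected characteristic itself is never given to it:

```python
import random

random.seed(0)

# Hypothetical synthetic data: postcode acts as a proxy for a protected group,
# and historical hiring decisions were biased against group "B".
def make_record():
    group = random.choice(["A", "B"])
    postcode = "1000" if group == "A" else "2000"  # proxy correlation
    qualified = random.random() < 0.5
    if group == "A":
        hired = qualified                      # unbiased historical decisions
    else:
        hired = qualified and random.random() < 0.3  # biased: most qualified B rejected
    return {"postcode": postcode, "qualified": qualified, "hired": hired}

data = [make_record() for _ in range(10_000)]

# "Learn" the historical hire rate per postcode. The protected attribute is
# never used, yet the bias transfers through the proxy variable.
def hire_rate(records, postcode):
    subset = [r for r in records if r["postcode"] == postcode and r["qualified"]]
    return sum(r["hired"] for r in subset) / len(subset)

rate_a = hire_rate(data, "1000")
rate_b = hire_rate(data, "2000")
print(f"Learned hire rate for qualified applicants: {rate_a:.2f} vs {rate_b:.2f}")
```

A model scoring new applicants by their postcode's historical hire rate would systematically disadvantage qualified group "B" applicants, which is one reason the paper stresses auditing algorithms and the availability of information on protected characteristics: without recording group membership, this disparity cannot even be measured.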