There is certainly no lack of available data nowadays. Determining which part of that data actually contains information you are interested in and can use is a science as well as an art. To that end, the data has to be acquired first, which already presents its own challenges and requires thought and planning. The data may need to be filtered, normalised and transformed before it is stored. The storage format and access rights have to be defined, and finally the data is used in algorithms and visualised, both of which have to be designed with care in advance. Of course, all of these steps need to be done with security in mind, so that the data remains verifiable and tamper-proof. This is an important point: if the data you have is no good, i.e. garbage, the resulting analyses will be garbage as well. Remember also that this is not done once but has to be repeated, since your data is likely not static: it is added to, updated and changed at pre-defined intervals or on certain triggers.
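The acquire → filter → normalise → transform → store cycle described above can be sketched roughly as follows. This is a minimal illustration, not our actual implementation; all function names and the sample records are hypothetical.

```python
# Sketch of a re-runnable acquire -> filter -> normalise -> store pipeline.
# All names and records here are made up for illustration.

def acquire():
    # In practice this would poll an API or read a feed;
    # here we return fixed records, one of them deliberately bad.
    return [
        {"sensor": "a", "value": "12.5"},
        {"sensor": "b", "value": "garbage"},  # bad record, filtered out below
        {"sensor": "a", "value": "7.5"},
    ]

def is_valid(record):
    # Filter step: drop records whose value is not numeric.
    try:
        float(record["value"])
        return True
    except ValueError:
        return False

def normalise(record):
    # Normalise/transform step: cast to float so downstream
    # code always sees a single, consistent type.
    return {"sensor": record["sensor"], "value": float(record["value"])}

def run_pipeline(store):
    # Designed to be called repeatedly, on a schedule or on a trigger.
    for record in acquire():
        if is_valid(record):
            store.append(normalise(record))

store = []
run_pipeline(store)
print(len(store))  # 2 of the 3 records survive the filter
```

The point of the filter step is exactly the garbage-in/garbage-out concern above: bad records are rejected before they can contaminate storage and later analyses.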
Being multilingual is a big advantage, as much of the information is not available, or not fully available, in languages other than the local one(s). It also helps you understand the cultural and country context of the data, which can sometimes help you spot bogus data.
Once a framework for the previous steps has been defined and implemented, the analysis of the data can begin. This can be as simple as adding up numbers or plotting points on a map, or it can involve complex statistical algorithms that result in a (real-time) system response and/or a graphical depiction of the data in a chart or on a map. The latter is usually the preferred way for humans to consume data; computers don't need it.
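Even the "simple adding of numbers" kind of analysis usually means aggregating values per key before they can be charted or mapped. A toy sketch of that, with entirely made-up readings:

```python
# Aggregate hypothetical (region, magnitude) readings into per-region means,
# the typical shape of data handed to a chart or map renderer.
from collections import defaultdict

readings = [("north", 2.0), ("south", 3.0), ("north", 4.0)]

totals = defaultdict(float)
counts = defaultdict(int)
for region, value in readings:
    totals[region] += value
    counts[region] += 1

means = {region: totals[region] / counts[region] for region in totals}
print(means)  # {'north': 3.0, 'south': 3.0}
```

The same pattern scales up: replace the list with a data feed and the mean with a statistical model, and the output feeds a chart, a map layer, or a real-time response.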
We handle all of these stages, but our core expertise lies in stages one and two, especially in data acquisition, transformation, and the programming of algorithms involving big data and machine learning. We also deal with web frontends a lot, so we have some expertise there too.
In terms of tools and software environments, our expertise is mostly in the open-source world. More specifically, we use Linux/FreeBSD as well as the Python, R and Erlang/Elixir programming languages and their frameworks. Of course, some of these tools run on Windows too, and we also have experience with C# and Scala, though we don't use them often.
Some of our work and showcases are still online: earthquake data (currently paused), solar and space weather data (regular updates), planetary and space-object data (regular updates) and conflict maps (soon other maps).