MS Excel as part of data processing
When we talk about data processing, we mostly mean processes that start with data coming from the database of an IT system and end at another well-defined, structured data store (even reporting and visualization systems work with their own databases). These processes can be automated using tools that are affordable for players with heavy IT support and large enough data sets. Many data-related tasks, however, do not fall under that ideal category. The bitter reality is:
- a lot of data is created in MS Excel at companies that have no dedicated IT systems;
- MS Excel is still one of the main exchange formats between IT systems, departments and organisations;
- many tasks are ad hoc in the sense that they involve too little data, or are not defined precisely enough, for the tools mentioned above to be applicable;
- the goal of the processing is some ad-hoc report or a special case that is not worth including in the data processing facility described above;
- the systems above demand learning on the users' part and system administration on IT's part.
For these reasons, MS Excel is the de facto reality in a large part of data processing. And DJEENI is the platform that enables process automation in MS Excel.
Using MS Excel does not mean that you should give up on data automation. The natural choice is to write Excel macros to automate tasks (or, even easier, to record macros, which soon turns out to be unusable in most cases). Custom-developed macros, however, come with problems of their own:
- It is relatively costly to develop custom macros for each and every task.
- Macros are rarely documented, so users do not know the exact conditions their input must satisfy or the exact results they can expect from the solution.
- Custom macros are almost never auditable.
- They are bound to their developer. Once the developer is no longer available, the macro goes unmaintained and users have to fall back to manual processing.
- Many custom macros can accumulate over time, and their sheer number makes it very hard to understand what is happening with the data.
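The documentation problem above is easy to illustrate outside of VBA as well. The following is a minimal Python sketch (the sample data, function names, and column names are all hypothetical, chosen only for illustration): a "recorded macro"-style routine that silently relies on column positions, next to a version that states and checks its assumptions the way a controlled process should.

```python
import csv
import io

# Hypothetical sample export, standing in for a sheet such a macro would process.
SAMPLE = """Region,Q1,Q2
North,100,120
South,80,95
"""

def brittle_totals(text):
    """Recorded-macro style: assumes a fixed column order and documents
    nothing -- it becomes silently wrong if a column is inserted or moved."""
    rows = list(csv.reader(io.StringIO(text)))
    return {r[0]: int(r[1]) + int(r[2]) for r in rows[1:]}

def robust_totals(text):
    """Looks up columns by header name and fails loudly when an expected
    column is missing -- the assumptions are explicit and checkable."""
    reader = csv.DictReader(io.StringIO(text))
    required = {"Region", "Q1", "Q2"}
    missing = required - set(reader.fieldnames or [])
    if missing:
        raise ValueError(f"missing columns: {sorted(missing)}")
    return {row["Region"]: int(row["Q1"]) + int(row["Q2"]) for row in reader}
```

Both functions agree on well-formed input, but only the second one tells the user which conditions the data must satisfy, and refuses to run when they do not hold.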
Even where MS Excel is the data platform of choice for certain data processing, the processes must be better organised and controlled than custom macros allow.
Traditionally, IT systems are administered and maintained by the IT organisation, which builds a complex IT architecture from them. End users have a passive role: they can request a change, but they are not allowed to make decisions without getting a nod from IT.
As data becomes more and more important, this setup causes a new headache: data belongs content-wise to the business, but technically it is still part of the IT systems controlled by IT. At most companies the solution to this problem is to get the data out of the IT systems ('Can I get it in Excel?') so that data owners and end users can work with it.
Data owners need the data because business is becoming agile. The life cycle of organisations, their business, and the corresponding data needs has shortened dramatically. There is no time to wait even weeks for a change in a report, which is the reality of the current BI setup with IT-owned systems and external BI developers.
Instead of further centralising data handling (through further IT architecture building), an at least partially decentralised approach should be used to satisfy the agile needs of the business and to allow data owners to really own their data and data processes.
More about process automation in MS Excel and DJEENI