
The move to the cloud continues to accelerate, and most organizations I deal with are at minimum incorporating cloud platforms and processing into their architectures … if not pressing to move largely to the cloud. While the cloud has many advantages, caution is needed to ensure that its risks are mitigated while those advantages are pursued. One approach that can make a cloud migration quite costly is transferring analytic code and processes as-is instead of greatly increasing the focus on efficiency.
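To make the cost dynamic concrete, here is a minimal back-of-the-envelope sketch in Python. All rates, runtimes, and run counts are hypothetical assumptions for illustration only, not figures from this article: the point is simply that under pay-per-use pricing, every extra hour an unoptimized job runs is billed, so the same workload lifted as-is can cost several times more than a tuned version.

```python
# Hypothetical back-of-the-envelope comparison (all figures are illustrative
# assumptions): in the cloud, compute is metered, so an inefficient job pays
# for every extra hour it runs.

HOURLY_RATE = 4.00      # assumed cost per compute-hour on a cloud platform
RUNS_PER_MONTH = 30     # assumed daily batch job

def monthly_cost(runtime_hours: float) -> float:
    """Monthly cost of a job billed per compute-hour."""
    return runtime_hours * HOURLY_RATE * RUNS_PER_MONTH

lift_and_shift = monthly_cost(runtime_hours=6.0)   # code moved as-is
optimized      = monthly_cost(runtime_hours=1.5)   # same logic after tuning

print(f"Lift-and-shift: ${lift_and_shift:,.2f}/month")  # $720.00
print(f"Optimized:      ${optimized:,.2f}/month")       # $180.00
```

The same inefficiency that was invisible on owned, already-paid-for hardware shows up directly on the monthly bill once compute is metered.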

Efficiency? Our Code Is “Efficient Enough”!

In a classic on-premises environment, analytics and data science teams aren't known for the efficiency of their processes. That's largely because processing was effectively "free": the equipment was already on the floor and ready to be used. In fact, analytical processes were often run at off-peak times, making use of what would otherwise have been idle capacity. This was a win for everyone.

Traditionally, the primary concern when it came to analytics efficiency was that a process was “efficient enough” to meet two relatively low bars:

The process would finish within the timeframe needed

The process wasn’t so inefficient that it caused …

Read More on Datafloq
