complexity, data science, digitalization
With data becoming the key to any organization's progress, improving its quality and consistency has become imperative. This need led to the concept of Master Data, which encapsulates the core of business information: the structural and hierarchical references an enterprise needs, held in a form that is easy to share. Without Master Data, it is hard to run a business today. Companies that manage their Master Data effectively see a direct impact in higher sales, better risk management, and improved profitability.
transparency, analytics, data science
Given the breakneck pace at which markets move, supply chain executives must think on their feet to stay ahead of the competition. Each day brings new possibilities and new factors to weigh, making decision-making an unenviable task. Making the right decision requires an eclectic mix of experience and knowledge, the foundation of sound judgment.
performance gap, analytics, data science, S&OP
The ultimate goal of any business is to deliver its product or service to the customer on time and in the right quantity. Supply chain management plays a significant role in achieving this goal. The supply chain of any business is a complex and dynamic network of integrated processes, people, and technology built to fulfill customer demand. There are different players within the supply chain of any product: suppliers, wholesalers, manufacturers, 3PL service providers, and so on. Successful supply chain management requires the efficient management of each of these parts, with the right strategic, tactical, and operational decisions taken and implemented at the right time.
analytics, data science, digitalization, execution management
With the rise of the age of data abundance, it is natural that technologies to process that data appear: data infrastructure, analytics, and AI platforms. These technologies enable us to access, transform, and analyze the vast reservoirs of data available to us. However, analyzing and processing data yields, at best, insights in dashboards and, in the worst case, simply more information, and such information is useless unless it can be put to good use.
supply chain, innovation, analytics, data science, execution management
With the advancement and widespread use of the Internet, there has been an astronomical increase in both the number of data sources and the amount of data generated from each source.
complexity, transparency, data science, VUCA
Increasing customization and specialization have made their way into all forms of business and production. The supply chain is no exception to this reorganization, and as a result, supply chain segmentation is a hot topic.
supply chain, covid19, transparency, analytics, data science
A digital twin is a highly detailed digital replica of a system that uses comprehensive data to emulate how that system behaves at all times. A supply chain digital twin is therefore a simulation model of a supply chain: you feed the model real-time data from all of the organization's sources and systems, and it works out the effects of macro- and micro-changes on the system using advanced analytics and learning models.
In our ongoing series, the AIO Data Science team publishes tutorials that go along with AIO on GitHub. Today, aioneer Maryam introduces read_and_write. This function is necessary because CSV files provided by clients often contain bad lines: lines with too many fields, caused, for instance, by stray commas. As a result, these CSV files cannot be read by Qlik and need to be cleaned first. Doing this file by file would be very time-consuming. Secondly, when we receive data spread across several files or sheets, these need to be concatenated. It would be great to do both in one simple function. In the tutorial, Maryam shows you how!
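As a rough idea of what such a helper can look like, here is a minimal sketch using pandas. The function name, signature, and bad-line handling shown here are our own assumptions for illustration; the actual read_and_write function on the aioneers GitHub page may differ.

```python
import glob
import pandas as pd


def read_and_combine(path_pattern, output_path):
    """Read several CSV files, skip malformed lines, and write one combined file.

    Simplified sketch; the released read_and_write function may behave differently.
    """
    frames = []
    for path in sorted(glob.glob(path_pattern)):
        # on_bad_lines="skip" (pandas >= 1.3) drops rows with too many
        # fields, e.g. caused by stray commas, instead of raising an error.
        frames.append(pd.read_csv(path, on_bad_lines="skip"))
    combined = pd.concat(frames, ignore_index=True)
    combined.to_csv(output_path, index=False)
    return combined


# Hypothetical usage: clean and merge all client exports into one file.
# df = read_and_combine("data/client_export_*.csv", "data/combined.csv")
```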
At aioneers, we create a lot of dashboards and run many machine learning tasks. Some of those analyses are done regularly. To deliver these dashboards and models, we need to do a lot of data transformation, like cleaning data and joining tables. To automate data transformation, we use Databricks clusters to run Python and, sometimes, R scripts. Databricks allows us to run data transformation scripts automatically on a schedule, but it is missing one essential feature: an email notification on how the data was loaded.
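One simple way to close that gap, assuming an SMTP server is reachable from the cluster, is to send a short status mail at the end of the transformation script. The sketch below uses Python's standard smtplib; the host, sender, and recipient names are hypothetical placeholders, not our actual setup.

```python
import smtplib
from email.message import EmailMessage


def notify_load_status(recipient, subject, body,
                       smtp_host="smtp.example.com",
                       sender="etl@example.com"):
    """Send a plain-text status mail after a data load.

    Sketch only: in practice, the SMTP host and credentials would be
    read from a secret store rather than hard-coded in the script.
    """
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)


# Hypothetical usage at the end of a scheduled transformation script:
# notify_load_status("team@example.com",
#                    "Nightly load finished",
#                    "All transformation steps completed without errors.")
```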
In today's tutorial, Sebastian shows you how to use vault_get_secret, the function we released on our GitHub page last week. vault_get_secret helps you extract secrets from the Azure key vault.
The key vault is based on three pillars: key management, secret management, and certificate management. We mainly use it for secret management, where we store tokens, passwords, API keys, and so on. Since we need to access these from our scripts, we wrote vault_get_secret so that we no longer store secrets in our code but access them through the function. This is a more secure way of working, and we have to worry less about who has access to the code.
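A minimal sketch of the idea, using the azure-identity and azure-keyvault-secrets packages, looks like the following. The vault URL and secret name are illustrative, and the actual vault_get_secret released on our GitHub page may handle authentication, caching, and error handling differently.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient


def get_secret(vault_url, secret_name):
    """Fetch a secret value from an Azure key vault.

    Simplified sketch of what vault_get_secret does: authenticate,
    open a client against the vault, and return the secret's value.
    """
    credential = DefaultAzureCredential()
    client = SecretClient(vault_url=vault_url, credential=credential)
    return client.get_secret(secret_name).value


# Hypothetical usage: read an API key instead of hard-coding it.
# api_key = get_secret("https://my-vault.vault.azure.net", "my-api-key")
```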