Excited about the recent announcement of https://github.com/rstudio/reticulate, which allows R and Python to be used together, and especially about how that can enable Shiny dashboards to be created with Python. Tutorial at https://rviews.rstudio.com/2018/04/17/reticulated-shiny/
AI is fascinating, but most organizations don’t have the magnitude of data or the skills to effectively leverage it.
Watching Qlik’s Qonnections 2018 keynote yesterday opened my eyes to a new application of AI. Qlik claimed that Qonnections is the largest data analytics conference in the world. I’m not sure about that, but their 45,000 existing customers are hard to argue with. Their CEO made some interesting claims: 10 of the 10 largest pharma companies use Qlik, along with an impressive list of top-10 companies across several leading industries.
The progression is always like this: data -> analytics -> insights. I’ve heard it phrased many ways by many vendors, but the two biggest hurdles with data are always the same. Gathering high-quality data is difficult. Once clean data is available, there are lots of tools to display it. But regardless of the quality of the tools, deriving actionable insights is still tricky.
It all starts with data literacy and a data-centric culture.
Qlik’s director of research presented their soon-to-be-released “Cognitive Engine.” Its highlight is a clever use of AI: with Qlik’s new technology, a researcher can visually drill into apparent anomalies in the data, and the Cognitive Engine’s AI will proactively identify statistically significant variances worth exploring.
This, to me, was fascinating. It was clearly one of the quickest and easiest ways a data-centric organization could start to leverage AI.
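Qlik hasn’t published how the Cognitive Engine works, but the underlying idea of flagging statistically significant variances can be sketched with a textbook z-score check. The function name, threshold, and sample data below are my own illustration, not anything from Qlik:

```python
# Illustrative sketch only: flag values whose deviation from the mean
# exceeds a z-score threshold. This is the generic statistics-101 idea,
# not Qlik's actual (unpublished) algorithm.
import statistics

def flag_anomalies(values, z_threshold=2.0):
    """Return (index, value, z_score) for points far from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [
        (i, v, (v - mean) / stdev)
        for i, v in enumerate(values)
        if abs(v - mean) / stdev > z_threshold
    ]

# A hypothetical series with one obvious spike at index 6.
monthly_tests = [102, 98, 101, 99, 100, 97, 160, 103]
print(flag_anomalies(monthly_tests))
```

The value of a product like this is doing the equivalent automatically, at scale, as you click around a dashboard, rather than requiring an analyst to script the check by hand.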
Amazon’s webinar today highlighted several ways that Big Data is being used in the HCLS industry. One of the key insights was the self-realization within one of the featured medical practices that they were “data rich, insight poor.” Having tremendous amounts of data isn’t of much value to anyone without the appropriate tools and skills to visualize, interact with, and explore it, which of course must be coupled with people who are empowered to act on the insights revealed.
Several of the tools demonstrated seemed to use Shiny and R to create interactive graphics. This is a great combination: developers with the right skills prepare the right kind of visualizations so that self-service data exploration becomes meaningful. So many self-service reporting tools end up being huge time sinks because the underlying data organization was incomplete or inconsistent.
One of the featured labs highlighted how expenses were reduced and customer satisfaction increased by the appropriate use of dashboards and educated data exploration. They were able to reduce repetitive testing and quickly identify trends in unreimbursed testing. Then, with data at their fingertips, they could discuss the trade-offs between economic and medical goals.
Some technologies to investigate further:
I’m headed out to this conference in March. It will be great to revisit my old stomping ground. I used to walk to the convention center; it took about an hour.
But the important part: hearing about the latest innovations in AI, Analytics, and Cryptocurrency, and meeting industry influencers.
From the conference web page…
The world’s biggest and most important GPU developer conference. Learn how to harness the latest GPU technology. Enjoy face-to-face interactions with industry luminaries and NVIDIA experts.
Started some online work with DataCamp this week. The first software I developed was in FORTRAN. Can you believe it? My professor told me that to a Mechanical Engineer, all software is FORTRAN. I moved on to C. Those professors didn’t know what they were missing. C++ quickly became my happy place. And, from my perspective, most modern languages derive a great deal from that early work.
I can program PHP with my eyes closed. Try it and you’ll see just how tricky that is. MATLAB and Mathematica got frequent use in grad school, but it’s been a while. Python seems to be the language of choice for quick dataset investigation and integration with AI tools.
Python list comprehensions are cool! That’s one slick line of code to adjust an entire dataset.
phrase = 'List comprehensions are more interesting with numbers'.split()
result = [[w.lower(), len(w)] for w in phrase]
print(result)
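The same one-line pattern works just as well for numeric data. As a small illustration of my own (not from the DataCamp course), converting a whole list of Celsius readings to Fahrenheit:

```python
# My own toy example: transform an entire numeric dataset in one expression.
celsius = [0, 12.5, 30, 100]
fahrenheit = [c * 9 / 5 + 32 for c in celsius]
print(fahrenheit)  # → [32.0, 54.5, 86.0, 212.0]
```

Compared with a for-loop that appends to an accumulator, the comprehension states the transformation directly and leaves no mutable bookkeeping to get wrong.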
Let the journey begin!
From the DataCamp website…
Learn Data Science from the comfort of your browser, at your own pace with DataCamp’s video tutorials & coding challenges on R, Python, Statistics & more.