Building Understanding Out of Incomplete and Biased Datasets using Machine Learning and Databricks


At Atlassian, product analytics exists to help our teams build better products by capturing and describing in-product behaviour. Within our on-premise products, only a subset of customers choose to send us anonymised event data, meaning we have an incomplete and biased dataset. In this world, something as simple as 'what percentage of customers use feature X' becomes a non-trivial estimation task. The problem becomes more complex still when a metric is subadditive, such as estimating distinct users of a product feature, where one user using the feature on multiple (and possibly unknown) instances should be counted as only one user, and our methodology needs to account for this. In this talk, we'll dive into our estimation methods and the adjustments we make for various metrics, providing an accessible guide to operating in this environment. We'll also discuss how we democratised these estimation methods, allowing any stakeholder who can write a query to immediately access our models and create accurate and consistent estimates.


Video Transcript

– Hi, everyone. Welcome to the virtual Spark+AI Summit 2020. I'm Mike Dias, and along with me is Luke Heinrich. I work as a Data Engineer, and Luke works as a Data Analyst. We both work at Atlassian, and today we're going to talk about how we build understanding out of incomplete and biased datasets using Machine Learning and Databricks. But first I'm going to introduce you to Atlassian as a company and the landscape that we work with.

At Atlassian we believe that behind every great human achievement there is a team. From medicine and space travel to disaster response and pizza deliveries, our products help teams all over the planet to advance humanity through the power of software. Our mission is to help unleash the potential of every team. To achieve this mission, we build work collaboration tools that help our customers achieve their potential and do great things. Some of our most popular products are Jira, Confluence, Bitbucket, Trello, Statuspage, and Opsgenie.

We give our customers the deployment options that are most suitable for them. For customers that are ready for the cloud, we have our cloud offering, where we manage everything for them. For those that are not ready yet, we also have an on-premise option, where customers can install our products in their own environment. In both cases, we have to make sure that we are building great products and meeting our customers' expectations and needs. And among several things, there's one particular thing that's very important for building great products, and that thing is called product analytics. Product analytics helps us understand how users navigate our products and engage with our features, all with the aim of building better products. So product analytics lets us understand our users and form hypotheses to create the best possible products and features.

And the way that we work is that we instrument our products, both cloud and on-premise, to capture user actions that we can use to generate insights about product usage.

In our cloud products, the analytics data flows through a real-time event stream pipeline. In our on-premise products, we accumulate the analytics data and send it once a day. But it's very important to say that we never capture anything sensitive like personally identifiable information or user-generated content. Everything is secure and anonymized.

And in order to make the data accessible to data workers, we process it using Spark, in both streaming and batch pipelines, which then land the data in the Atlassian Data Lake. We store this data as Parquet tables, where Atlassians can access it and correlate it with other data sources to generate insights that help our teams go and build better products. More broadly, the Atlassian Data Lake is very heterogeneous in terms of processing and ingestion. In addition to product analytics, we also have several other sources, like internal databases and system integrations. The data can flow as a stream or as batch via AWS Kinesis, Kafka, and other technologies as well. And we also have other file formats, like CSV, Avro, and ORC, stored in the data lake. But no matter what the processing looks like, all the data ends up landing on S3, and it is mapped to tables via the Glue Metastore, where all the Data Lake tables are registered and organized by namespaces.

These tables are accessible via SQL using Databricks and AWS Athena. We are also very flexible in terms of visualization tools, so data analysts and data scientists can use their favorite tools to create their insights, like Databricks Notebooks, Tableau, Redash, Amplitude, and so on, and share their discoveries with product managers, who can use this information to make more data-driven decisions.

As I've been mentioning, everything data related at Atlassian is AWS based. We rely on S3, Glue, and Athena, and our Databricks account is also running on AWS. Our Data Lake is at petabyte scale, and it keeps growing at a rate of 100 terabytes per month.

We also have 2,000 internal users of the Data Lake per month, and more than 200 data workers whose primary job is to deal with data, like data engineers, data analysts, data scientists, and so on.

But let's get back to product analytics. Although we have a lot of data, we don't have all the data all the time. Consider this case: in our cloud option, we have the complete product analytics data set, so we can fully correlate it with other data sets, like license data, for example. Having a complete data set gives us a lot of confidence about the insights extracted from it.

Having the complete analytics data set isn't always the case. Our product analytics are opt-in, and some of our on-premise customers keep their products behind a firewall, making it impossible to share analytics data with us. So from our perspective, we know the customer purchased a product license, but we don't know how they use the product. That is an incomplete and biased data set, and it leads to consequences that are going to be explained by my mate Luke Heinrich. Over to you, mate. – Awesome. Thanks, Mike. And hi, everybody. It's great to be here, and I hope you're staying safe in these crazy times. I want to introduce you to the problems that come up with biased and incomplete data by running through a few examples. Hopefully, by the end of them, you'll have an appreciation for why data like ours can become pretty messy when trying to do analytics in practice and at scale.

What Fraction of Customers Use Feature X?

So let’s start off with a simple question. A product manager comes up to you and asks how many customers are using a feature that they’ve just shipped. This is the bread and butter of building understanding about product use and making data informed decisions.

Now for this example, I want to simplify the problem by imagining that in reality we only have two customer groups: Group A, on the left, of which 10% of customers use the feature, and Group B, on the right, where 90% use the feature. That's a very big swing between groups. And let's say that each group comprises half of the true customer base.

So if we were answering this question and had access to the complete and true data set of product use, we’d say that 50% of customers use the feature, the average between them. And that’s something that’s easy to see when you have the full picture. Just create a one or zero, based on whether the customer uses the feature and take the average across your base. It’s a very quick query.

But let's add some bias into this scenario, and say that due to the opt-in nature of product analytics, the red group is twice as likely to send analytics to us. In this case, the data that we receive is comprised of two thirds of customers from the red group and one third of customers from the yellow, as represented by the bars on the slide. While this is not the true mix of customers in reality, it does represent the mix we'd query in our product analytics.

And a direct query on this data set will suggest that just 37% of customers use the feature, as this set is now biased towards the red group, who use the feature a lot less. Now, is this what's going on in practice? Well, no, it's just what we see on the other end, based on who's kindly decided to send us the product analytics we work with.

And how about if we said instead that we have a larger bias, three times this time, towards the yellow group, who use the feature a lot more. In this case, we'd directly query an answer that 70% of customers use the feature, well above the answers of the last two. As you can see, it can be a bit of a problem, because the reality is that directly querying biased data gives an answer that is both a function of how many customers actually use a feature and of how that feature use varies within the biases of the data set you're working with. For this example, our analytics data could produce a raw answer anywhere between 10% and 90%, depending on how extreme the bias is towards either group. With that in mind, you have to be wary of whether your data is providing much value or whether it's just providing undue confidence in a certain number.
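To make the arithmetic concrete, here is a minimal sketch of the calculation behind those numbers, assuming the hypothetical two-group setup above (10% and 90% feature use, with each group half of the true customer base):

```python
# Hypothetical two-group setup from the example: per-group feature-use rates,
# each group making up half of the true customer base.
rate_red, rate_yellow = 0.10, 0.90

def observed_rate(weight_red, weight_yellow):
    """Feature-use rate we'd see in a sample mixed with the given opt-in weights."""
    total = weight_red + weight_yellow
    return (weight_red * rate_red + weight_yellow * rate_yellow) / total

print(observed_rate(1, 1))  # unbiased sample -> 0.50, the true answer
print(observed_rate(2, 1))  # red group twice as likely to opt in -> ~0.37
print(observed_rate(1, 3))  # yellow group three times as likely -> 0.70
```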

And you might be thinking that the bias I just described is extreme, but here is an actual bias that exists in the product analytics of one of our tools, where larger licenses, as defined by how many users their license is for, are almost half as likely to opt in and send us analytics compared to smaller ones. Now, if you're analyzing a feature that appeals to those trying to manage and deploy our products at scale, this particular bias can become very impactful.

Now, you might be thinking of some other techniques you could use, or judgment you'd apply, in dealing with the bias and creating an answer after that last example.

How Many Users Are Impacted by this Platform Change?

But that one was isolated. The reality is that over time, we have many analysts creating many answers across many products. It's never just a one-off deep dive into the data that's forgotten about thereafter. So let's move on to our next example, of how many users are impacted by a given platform change, something that affects many of our products.

In this example, let’s say that we have three analysts, Jim, Jane, and John, who each work on products A, B, and C, respectively.

Let's start off with Jim. After receiving this question about how many users are impacted, Jim runs a query and sees that 15 users are affected in our product analytics data. The true number of users affected, remembering that our product analytics is just a sample, is actually 45.

Jim doesn't do anything to the number he sees, expecting everyone else to do the same, and reports back to the business that 15 users are impacted in product A.

And then let’s move on to Jane. So Jane goes and runs a similar query for her product and finds that 10 users are affected. Again, because that data is a sample, the true number is actually 30.

Now Jane goes and extrapolates based on the simple logic of how many customers are phoning home. For this example, let's say that 50% of customers phone home, so Jane doubles her number to make 50% look like 100%, and reports that 20 users are impacted in product B.

And lastly, John sees eight users and also does an extrapolation that tries to control for the size bias we talked about earlier. He upweights large licenses and generates an estimate that 25 users are impacted in product C.

Now, if you're following along, you'll see that based on what's truly going on for our customers, product A has the largest impact from the platform change, yet Jim reported the lowest number of users impacted. And conversely, John, despite being the closest to reality with his estimate and objectively doing the best job, reported the highest impact for product C, when in fact it has the lowest impact. This example is one where the incomplete data has given a completely wrong view of the impact of the platform change and had a detrimental impact on decision making. Not necessarily because one method applied consistently would give the wrong answer, but because people were using different ones. Now, this might come across as a bit extreme, because if this question were asked in practice, you'd see much tighter collaboration between analysts. But imagine that Jane is doing her analysis two months after everybody else, Jim's analysis was a quick and unrelated one that a product manager has gone and taken the numbers from, and John could be in a completely different department. When moving fast and at scale, where ensuring past experiments and insights are leveraged is hard, examples like this become very possible.

And so hopefully from those examples, where things have fallen down a bit, you agree that biased and incomplete datasets can cause a few problems in practice. It has the potential to become a bit of a wild west if everyone goes and does their own thing. Which leads us to the bigger topic of this talk, and that is how we try to solve this by providing self-serve estimation.

We Always Need to Do Estimation to Try and Get the Number Right

And why estimation? Well, because every answer we create with our data is an estimation of what we think is going on in reality. And it's a gritty kind of estimation. It's not an A/B test, where you randomly assign cohorts to different experiences and estimate the uplift and error according to known formulas. Nor is it a truly random sample, where you might have 10% of the population, but a random 10%, one that you can reliably infer from and provide confidence intervals for how wrong you might be. Instead, we know that our sample of product analytics is biased. We have the statistics to prove it well beyond reasonable doubt. And it's impossible for us to know what we don't know, to truly know what reality looks like. So really, we're always just trying to estimate. Estimation is studied a fair bit in statistics under the random setting, with a lot of defined properties that are desirable. Things like unbiasedness, that your number will be accurate on average. Or consistency, where your estimate gets more and more precise as your data volume increases. Or efficiency, where you're using the best method to estimate some unknown quantity.

But in practice, there are more things that you want when building estimates at scale to help a business make decisions. Things like auditability, that you can look at the numbers and see or reproduce the calculations behind them. Consistency, in a second sense now, that one analyst is using the same methodology as another, so decision makers are able to compare apples with apples. Best practice, that ideally each estimate makes best efforts to control for the biases in the data. Understanding, that stakeholders are on board and trust the methodology, having some appreciation for where it has limitations. And the last one I'll mention, though I could go on, is accessibility, that analysts up and down the spectrum of technical and programming ability all have the ability to use the methodology. Now, you still want the ones on the left, they're very important, but those on the right are a tad more gritty, and they're sometimes pretty hard to get right in practice.

So to build out something that satisfies these properties, and we'll reflect on them later on, we provide self-serve estimation to anyone using our product analytics.

We Provide Self-Serve Estimation

So what does that mean?

We’ll progressively get through the detail,

How it works (the high level)

but how it works at a high level is that for all licenses that do not send us analytics, we provide similarity scores between them and the licenses that do send analytics to us. A license that we don't know about effectively distributes itself over the pool of known licenses, saying that I'm X percent similar to that one and Y percent similar to another. These scores allow the user to quickly create predictions for those who don't send us analytics data, allowing them to work with any metric as if they had the full picture.

From a practical perspective, it looks something like this. The analyst first goes and creates a data set of licenses and metrics, where some, in this case license A, do not send us data and so do not have a known value for the measure they're looking at. The second step is to attach the similarity scores we provide through a simple join, where here A is considered 5% similar to B, 8% similar to C, and so forth. And then the user can estimate the unknowns as the product of the similarity scores and the known metrics. So the value for A is estimated as 5% of the value for B, plus 8% of the value for C, and so forth, ultimately creating a predicted value for that unknown license. Now, having attached this estimate, the analyst can calculate metrics effectively ignoring that they're working with a sample of the data, such as estimating the average of this metric across the whole customer base, not just the licenses we're observing. Now let's go down even further, because I've described similarity scores without any guidance on how we even generate them.
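Before getting to how the scores are generated, here is a rough pandas illustration of the three steps just described. The column names, the third similarity weight, and the metric values are made up for the example; only the 5% and 8% weights come from the talk.

```python
import pandas as pd

# Step 1: the analyst's metric, defined only on licenses that send us analytics.
known = pd.DataFrame({"known_license": ["B", "C", "D"],
                      "metric":        [120.0, 80.0, 200.0]})

# Step 2: similarity scores for a license that does not send analytics,
# distributed over the known pool (the weights for A sum to 1).
similarity = pd.DataFrame({"license":       ["A", "A", "A"],
                           "known_license": ["B", "C", "D"],
                           "similarity":    [0.05, 0.08, 0.87]})

# Step 3: estimate the unknown license as the similarity-weighted average of
# the known metric: A ~= 0.05*B + 0.08*C + 0.87*D.
joined = similarity.merge(known, on="known_license")
estimated = (joined.assign(weighted=joined["similarity"] * joined["metric"])
                   .groupby("license", as_index=False)["weighted"]
                   .sum()
                   .rename(columns={"weighted": "metric"}))
print(estimated)

# Known and estimated values together approximate the whole customer base,
# e.g. the average of the metric across all four licenses.
all_values = pd.concat([known["metric"], estimated["metric"]], ignore_index=True)
print(all_values.mean())
```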

The first step in creating similarity scores is to build a predictive model of product use based on attributes that we know for all licenses, irrespective of whether they send us product analytics or not. To give some examples, think of things like how long they've owned the product, how many seats their license is for, and whether they own various other products, all the way to things like how they engage with our ecosystem. Now, just like more traditional examples, imagine that those who don't send us product analytics are our validation set, and we're trying to optimize our model on the known data to ultimately predict on them.

This rests on the assumption that the better we can describe the known data, the better we can describe the unknown.

Now, in order to create similarity scores rather than just a .predict capability, we fit a random forest to the problem I just described. We use a random forest because we're able to exploit its internal tree structure to define similarity. This is because, ignoring the bagging aspect, a random forest predicts using a weighted average of the training data that lands in each terminal node. That is, the prediction for an unknown example is just a weighted average of the training data. And if you haven't made that link already, it's those weights in the weighted average that we use as our similarity scores. They can be extracted with the formula on the slide, namely similarity as the fraction of terminal nodes two licenses land in together, normalized for how many other known examples are in that terminal node, and taken across all trees in the forest. A random forest is great for us compared to other kernel or similarity methods because the supervised element provides feature selection in defining similarity, as opposed to methods like nearest neighbours, which treat all input variables equally. And the tree structure makes it simple, as we don't need to pre-process, normalize, or scale any of our features for a distance calculation. Once we have these weights, or similarities, we dump them and the scored model output into our Data Lake. These scores are available to all users to estimate any arbitrary metric. It can be what we trained on, or maybe they'll be used to estimate usage of a given feature that the model wasn't explicitly trained on. Sound familiar to the earlier example? This is basically transfer of a trained model to a different response. And just like other circumstances you might have seen it used in, some responses do better at transferring to a certain situation than others. Accordingly, we have trained forests for additional responses such as product co-use. Here an important characteristic in defining similarity is whether a customer actually owns both products, something that might be uninteresting when just predicting generic usage or scale.
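Coming back to the weight-extraction formula described above, here is one way it could be coded, a sketch assuming a fitted scikit-learn RandomForestRegressor and synthetic feature data; it reflects our reading of the slide formula rather than Atlassian's exact implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def similarity_weights(forest, X_known, X_unknown):
    """Similarity of each unknown license to each known license: for every tree,
    an unknown example shares weight 1/|known in leaf| with the known examples
    landing in the same terminal node, averaged across all trees."""
    leaves_known = forest.apply(X_known)      # (n_known, n_trees) leaf ids
    leaves_unknown = forest.apply(X_unknown)  # (n_unknown, n_trees) leaf ids
    n_trees = leaves_known.shape[1]
    weights = np.zeros((X_unknown.shape[0], X_known.shape[0]))
    for t in range(n_trees):
        same_leaf = leaves_unknown[:, [t]] == leaves_known[:, t]   # co-membership
        leaf_sizes = same_leaf.sum(axis=1, keepdims=True)          # known per leaf
        weights += np.where(leaf_sizes > 0,
                            same_leaf / np.maximum(leaf_sizes, 1), 0)
    return weights / n_trees

# Illustrative fit on synthetic license attributes (X) and a product-use response (y).
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 5)), rng.normal(size=200)
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
W = similarity_weights(forest, X_known=X, X_unknown=rng.normal(size=(3, 5)))
print(W.shape, W.sum(axis=1))  # each unknown license distributes itself over the known pool
```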

Now, we dump each model into our Data Lake, and the user is able to select which one they want to use for their estimation.
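Very roughly, a daily job that publishes these scores to the lake (it runs as a scheduled Databricks notebook, as described in the platform section below) might look like the following sketch; the model URI, table names, and columns are all hypothetical, `spark` is the session a Databricks notebook provides, and `similarity_weights` is the extraction function from the earlier sketch.

```python
import mlflow.sklearn
import pandas as pd

# Load the forest trained and logged in a separate modelling notebook
# (hypothetical registry URI).
forest = mlflow.sklearn.load_model("models:/onprem_usage_forest/Production")

# Attributes known for every license, whether or not it sends analytics.
attributes = spark.table("analytics.license_attributes").toPandas()
feature_cols = [c for c in attributes.columns
                if c not in ("license_id", "sends_analytics")]
known = attributes[attributes.sends_analytics]
unknown = attributes[~attributes.sends_analytics]

# Extract similarity weights and publish them in long format for analysts to join on.
W = similarity_weights(forest,
                       known[feature_cols].to_numpy(),
                       unknown[feature_cols].to_numpy())
rows = [(u, k, w)
        for i, u in enumerate(unknown.license_id)
        for k, w in zip(known.license_id, W[i])
        if w > 0]
scores = pd.DataFrame(rows, columns=["license", "similar_license", "similarity"])
spark.createDataFrame(scores).write.mode("overwrite") \
     .saveAsTable("analytics.similarity_scores")
```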

And talking about users, diving into code next, I wanted to show the simplicity from a UX perspective of creating estimates. This is in the context of the earlier example of one, two, three, where you create your metric on known data, attach similarity, and then work out an estimated column to work with. At the top there, in the common table expression, the user goes and defines their metric on analytics data however they want. They own that section, and it can be broadly applied to whatever they're looking at. From there, they just join on the similarity tables and take the similarity-weighted average of known examples. That is, the sum of similarity multiplied by the metric across similar licenses. Now, the real query does have other metadata, like the model chosen and the date of the model score, but the crux of the code is that simple. A few extra lines of SQL go from an observed data set to using a fairly complex and considered model to build an estimate for the full population, something that was simply unavailable to them before.

The User Experience

And we can go further. Next on the screen is the code for how the tree structure would be used in SQL to generate an estimate. I'm not going to run through this directly, and sorry for maybe scaring you, but it's one of the blessings of doing this remotely that it's hopefully easy for you to pause or go back through the recording if you're interested. Now, you might be wondering why a data worker might want the tree structure rather than just using the similarity scores. And the answer is that 'known data' is a bit of a simplification that can break down in more complex cases. To provide an example of this, our on-premise products have versions that the customer chooses to upgrade to or stay on, and perhaps an analytics event was bugged for a given version, meaning that the similarity to customers on those versions needs to be dropped out of the calculation, as the analytics for them in this calculation is unfortunately meaningless and they're basically unknown.
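As an illustration of why the per-tree structure helps, here is a sketch of the same weight extraction with an exclusion mask, so that known licenses on a hypothetical bugged version are dropped from each terminal node before normalisation; this mirrors the idea just described, not the actual SQL shown on the slide.

```python
import numpy as np

def similarity_weights_excluding(forest, X_known, X_unknown, usable_mask):
    """Similarity-weighted-average weights as in the earlier sketch, but known
    licenses flagged unusable (e.g. on a version where the event was bugged)
    are removed from every terminal node before the per-leaf normalisation."""
    leaves_known = forest.apply(X_known)      # (n_known, n_trees) leaf ids
    leaves_unknown = forest.apply(X_unknown)  # (n_unknown, n_trees) leaf ids
    n_trees = leaves_known.shape[1]
    weights = np.zeros((X_unknown.shape[0], X_known.shape[0]))
    for t in range(n_trees):
        same_leaf = (leaves_unknown[:, [t]] == leaves_known[:, t]) & usable_mask
        leaf_sizes = same_leaf.sum(axis=1, keepdims=True)
        weights += np.where(leaf_sizes > 0,
                            same_leaf / np.maximum(leaf_sizes, 1), 0)
    return weights / n_trees

# usable_mask is a boolean array over known licenses, False for the ones whose
# analytics can't be trusted for this particular calculation.
```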

From a platform perspective, we start with known input variables for all licenses sitting in our Data Lake. We run a daily scheduled Databricks notebook that scores all our models. Those models were built in another notebook and stored in MLflow, from where they can be accessed by the daily scoring job, and we just use the random forest implementation in scikit-learn. The job populates model results back into the data lake as discussed earlier, with those tables now able to be combined with product analytics, allowing the data worker to quickly create estimates and largely forget that they're working with a biased and incomplete data set. Fairly happy days.

And before we move on to the next section, I want to come back to the properties we discussed earlier and see how the solution shaped up. From an auditability perspective, we have a model built in Databricks Notebooks and stored in MLflow, with all the auditability that provides; from the practical perspective, the analyst just needs to grab the scores they used from that day and the calculation will follow. Consistency, it's absolutely so when multiple analysts are using the same extrapolation logic rather than generating their own. For best practice, we can always tune our model more and more; as it stands, it's using many available attributes and is fine-tuned to the known data, something that people couldn't easily replicate on the fly or build out within a SQL query. Understanding, it effectively lets analysts outsource that to us, where we've socialized the results to the wider on-premise business, meaning they need to spend less time explaining techniques and more time building insights. And lastly, for accessibility, any analyst just needs to use SQL; they don't really need a comprehension of supervised learning, random forests, or so forth to leverage this output. So hopefully you'll agree that it serves these properties pretty well. Now that you've got a bit of a taste for scaling out estimation in an organization via self-serve, we want to make things a bit more tricky and run through sub-additive metrics in this framework. Because everything we spoke about so far falls down when metrics aren't affine or additive.

What’s a good example of a metric like this?

Where Our Similarity Scores Fall Down: Subadditive Metrics

Monthly Active Users, or as it's often known, MAU.

The distinct count of people using a product, or a set of products, or maybe some functionality within them, for a given window of time.

So it might be a day or a year, or basically anything else as well.

Now, before I move on, I'd like to flag that all of these estimates have a huge asterisk on them, and this definitely isn't something we report externally. For that, we stick to our cloud products, where it's known and reliable.

And we do this estimation,

because product teams in our on-premise business still want to try and understand this for the purpose of their decision making, even when the estimate comes with a fairly hefty error bar. And originally it gave us a bit of a headache. Why the headache? Because it's a distinct count, and at Atlassian our products work great together, and users often interact with a lot of them.

So in this equation here, sticking with MAU, on the left you have a Jira instance with 14 Monthly Active Users, next to a Confluence instance with 11. And when you put them together, you calculate a MAU of 17: only three of the users in the Confluence instance were not already active users of Jira. And again, I'll stress that when you have the complete data set, this calculation is easy and reliable. Just grab your universe and do a count distinct.

What happens when we remove some data? Well, 14 plus question mark is unfortunately question mark; it all becomes unknown. And we can go even further, not knowing the value for the first license either.

But for an analyst trying to create a distinct count estimate in this environment, they need to consider who in an unknown license isn't already captured in their known data. That is, they only need to add on the new users in the unknown license. And further, they need to repeat that for the next license, now also considering who they've latently added from the first one and making sure those users are excluded from the second. As difficult as that already is, we're then back to square one with all of our problems.

We’re Back to Square One with All Our Problems

Because when people try to make their own approximations or assumptions for a distinct count, although they might be using the linear similarities for per-license estimation, they're adding their own flavour on top of it. And the maths of consistency is that consistent plus inconsistent is still inconsistent, maybe a little better than if the first part were also left to the discretion of the analyst, but it's still that original problem we faced earlier.

What does a typical equation in this world look like? That is, what is nine plus ten? Well, if we have a customer with two different products like Jira and Confluence in the same geography, it might be 11 when put together. There's a lot of co-use, as our products work really nicely together.

But what if it was the case of one of our multinational customers, who have, say, a Jira license in the USA and maybe one in Sweden, or somewhere in a completely different geography? In that case, none of the users might overlap at all, and nine plus ten is, as you'd normally say, nineteen. There's plenty of room for interpretation in how these calculations should be estimated, and we're left asking how we can provide a similar framework for solving this problem of estimation at scale.

How Do We Provide a Similar Framework?

And that's what we run through now: even when things get tougher, estimation still has to be made easy for the organization.

I'll flag that, in the interest of time and understanding, a lot of the technical detail here is in the slides, and we'll remain fairly high level in this space. When starting off, we again use a predictive model, but this time for the distinct element of MAU, or whatever the calculation is. This time it's an iterative process, where what we say is that the MAU of a set of licenses is equal to the MAU of that set minus one license, plus a fraction of the MAU of the license we excluded. So what does that mean? Well, maybe we have two licenses with 20 MAU together, and then there's a third that has 10 MAU, but when you put it all together it's 25. So in that case, it's 20 plus 50% of that 10 MAU in order to make 25. And it's that 50%, the fraction of the next license that we should include, that we go and predict.
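That recursion fits in a couple of lines; here is a minimal sketch using the worked numbers from the talk, with the predicted fraction hard-coded where the supervised model would supply it.

```python
# MAU(S + license) = MAU(S) + predicted_fraction * MAU(license)
def add_license(mau_so_far, license_mau, predicted_fraction):
    """Add one more license to a running distinct-user estimate, counting only
    the predicted fraction of its users that are new to the set so far."""
    return mau_so_far + predicted_fraction * license_mau

# Worked example from the talk: two licenses totalling 20 MAU, a third with 10 MAU,
# and a true combined MAU of 25 -> the fraction the model needs to learn is 0.5.
combined = add_license(mau_so_far=20, license_mau=10, predicted_fraction=0.5)
print(combined)  # 25
```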

Because we have some data, we're actually able to build a supervised model for this fraction, or weight, based on characteristics of the set of licenses we've already considered and the one we're adding. Things like whether it's the same product, a different geography, relative sizes, and so on. And this model goes the whole way up, from putting two instances together to maybe adding an eleventh license to ten that have already been considered. Using the known MAU per license and this method of building it up, we can then score this out for everyone and create a per-customer MAU estimate: start with the first license, predict the weight for the second, and do the same for the third and so on, until all licenses for the customer are considered.

And from this, we have the ability to estimate total MAU, not just MAU for a single license in isolation. For this estimation framework, our next step is to allocate the sub-additive total back to individual licenses, effectively creating an additive metric. We do this proportionately. A simple example of what this means: I have two licenses, one with 10 users and one with 20, so the second is double the first. Let's say that together they only have 24 MAU, not 30. Here we need to allocate that 24 back to the individual licenses, so we say the first one is worth eight and the second is worth 16. They're both scaled down by 20%, and they sum to the 24 MAU they have together. The larger instance still has an allocation twice the size of the first, proportional to its MAU considered alone. Once that's done, we focus on how the known licenses predict the unknown. Because for all the estimation magic in predicting the unknown, what actually goes on is that we're just smartly scaling up known data in order to estimate the whole. That's all we're doing. In that sense, if I'm a known license and there are two unknown licenses that are each 50% similar to me, then everything I do is worth double when scaled out to the whole population. If another user in my license uses a feature, we estimate that two more did in the whole population, or to express it differently, the license has an influence of two. That influence score, of two in the situation described, is what we build out for each license in this estimation.
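To make the allocation and influence ideas concrete, here is a small sketch with the numbers from the talk. Note that in practice the influence scores come out of the allocated-MAU methodology; the direct similarity sum below is only an illustration of the 'worth double' intuition.

```python
import pandas as pd

# Proportional allocation: a customer's licenses have 10 and 20 MAU alone,
# but only 24 MAU together, so each allocation is scaled by 24 / 30.
alone = pd.Series({"license_1": 10, "license_2": 20})
combined_mau = 24
allocated = alone * combined_mau / alone.sum()
print(allocated.to_dict())  # {'license_1': 8.0, 'license_2': 16.0}

# Influence: a known license counts once for itself, plus however much of the
# unknown population it stands in for via the similarity scores.
similarity_to_known = pd.DataFrame({
    "known_license":   ["K", "K"],
    "unknown_license": ["U1", "U2"],
    "similarity":      [0.5, 0.5]})
influence = 1 + similarity_to_known.groupby("known_license")["similarity"].sum()
print(influence.to_dict())  # {'K': 2.0} -> everything K does counts double
```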

And once we have those influence scores, they can be applied quickly to any active use calculation, such as the distinct count of users of a single feature. It's the same idea as before. Analogous to how we provide similarity scores for multiple responses to transfer from in the linear case, we provide influence scores for multiple windows, such as a single product, all products together, or maybe just a single geography. This is because the allocated MAU methodology changes depending on what universe of licenses you look at; it shrinks the larger the universe is. But with those influence scores, it becomes really, really simple to build a distinct count estimate, as we now run through.

And so back to the UX perspective, with abridged code. In this world the top query again has a user-defined calculation of active users, and the join is a simple attachment of the influence scores they need to scale their estimate up. Now, you might be wondering why the earlier code snippets weren't this easy, and that's because this is a distinct count: data workers can only work with the universes of licenses that we built this calculation for, rather than any arbitrary subset of, say, just these 10 licenses. The limitation here actually creates simplicity in its use, as we no longer have to account for any arbitrary situation.
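A rough pandas equivalent of the abridged query being described might look like this, with the license IDs, active-user counts, and influence values all made up for illustration.

```python
import pandas as pd

# User-defined calculation on the licenses we do observe:
# distinct users of a feature per known license.
feature_users = pd.DataFrame({"license": ["K1", "K2", "K3"],
                              "active_users": [40, 15, 5]})

# Influence scores published for the chosen window (e.g. a single product).
influence = pd.DataFrame({"license": ["K1", "K2", "K3"],
                          "influence": [1.3, 2.0, 1.1]})

# Scale each observed count by how much of the population that license stands in for.
estimate = (feature_users.merge(influence, on="license")
                         .eval("scaled = active_users * influence")["scaled"]
                         .sum())
print(estimate)  # estimated distinct users across the full customer base
```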

But hopefully you see that it's very easy to create estimates of active use, despite the complexity of the world they're living in: biased and incomplete data, plus trying to estimate a sub-additive metric on top of it. I also want to go back to the earlier slide of properties. Thinking through some of those on the right, like consistency and accessibility, you find that this method satisfies them in the same way as before.

Oh, so thanks for sticking with us this far. And we’ll move into our summary and Q and A now.

So we went fairly deep into the technical implementation of our estimation methodology, but I want to get back to the high-level crux of the problem and solution.

Our on-premise product teams rely on a biased and incomplete data set of product use to help them better understand the customer and build better products, and we want to infer from it as best we can in a scaled manner. Through using accessible smarts, we've got a method for all data workers to create consistent and accurate estimates for the whole population, with the end result being quicker and more trustworthy insights back to the team. It's that simple. And before Q&A, I'd like to take this opportunity to remind you about rating and feedback. We'd love it, so please be as candid as you like. Feedback is a gift for us to try and improve. Otherwise, we'll be online for Q&A now. Apologies in advance if anything sounds a bit dopey, as I believe it'll be around 8:00 am in Sydney. But thanks a ton for attending. We both hope you got something out of it and can make some connections to your own organization and the way it uses data. I'm sure you'll find something incomplete sitting around somewhere.

About Luke Heinrich

Atlassian

Luke Heinrich is a Sydney-based analyst at Atlassian, where he drills into all things data and decisions. Previously, he spent his time developing personalisation algorithms for retail e-commerce in Australia, and he holds another life as a fellow of the Actuaries Institute of Australia. Outside of work, Luke loves to read and will gladly talk your ear off about cricket.

About Mike Dias

Atlassian

Mike Dias is a data engineer at Atlassian, making the move from Sao Paulo to Sydney to manage the data ingestion pipeline for Atlassian's on-premise products. Prior to Atlassian, Mike built real-time streaming pipelines for an e-commerce retailer. He is an extremely fit long-distance runner, cooks a mean barbecue and loves to explore Sydney.