Fourth Week with Digital Analytics
This week, the prodigal sabbatical continues; I will keep boosting my progress with EAS, try to finish off my test by this weekend, and then probably set my sights on Google Data Studio and Facebook Analytics. GA4 will probably stay in the backseat for about another two weeks, after which I will go back and bring it to completion. Also, after checking the analytics on my account, I found an alarming prospect: I have completed 30% of my course timeline, but my course progress hasn’t caught up yet and still lingers around 10%. With extensive planning this week, I aim to turn this around.
As my final informal note for this week, I wish to inform you that I have begun my first week at business school. As a freshman at NALSAR University’s IPM program, I have started to reach out to new people and begun to expand my network.
Getting into the intricacies of the course now, I started my week off with Sum and count commands. Talking about the Sum command, it can be further broken down into three sub-commands.
Under the Sum command,
Sum
This command essentially sums everything in a given range
Manual Code (for example): =sum(C1:C4)
Automatically, it can be triggered by going to Formulas → AutoSum
Sumif
This command sums up the column on one condition
Manual code (for example): =sumif(range, criteria, [sum_range])
As a tip, instead of specifying criteria as hard-coded text, we should reference the cell that holds the value. This way, if we switch to another value, the formula updates automatically.
Sumifs
This command again sums up columns on condition; the only caveat being that here there are multiple conditions
Manual code (for example): =sumifs(sum_range, criteria_range1, criteria1, [criteria_range2, criteria2]...)
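Outside Excel, the same SUM / SUMIF / SUMIFS logic can be sketched in plain Python. The field names and figures below are invented purely for illustration:

```python
# A small "table" of rows; the data and field names are made up for this sketch.
rows = [
    {"region": "East", "product": "A", "amount": 100},
    {"region": "West", "product": "A", "amount": 250},
    {"region": "East", "product": "B", "amount": 75},
]

# SUM: add up every value in the column
total = sum(r["amount"] for r in rows)

# SUMIF: add up amounts where a single condition holds
east_total = sum(r["amount"] for r in rows if r["region"] == "East")

# SUMIFS: same idea, but every condition must hold at once
east_a_total = sum(
    r["amount"] for r in rows
    if r["region"] == "East" and r["product"] == "A"
)

print(total, east_total, east_a_total)  # 425 175 100
```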
Under the Count command,
CountA
This command counts the number of non-empty cells, text items included
Manual code (for example): =counta(C1:C4)
Usually, =counta(B:B)-1 is used → this way, we count all the values in the column, minus the header.
CountIf
This command counts based on a given condition
Manual code (for example): =countif(range, criteria)
Countifs
This command again counts on conditions, the only difference being their plurality
Manual code (for example): =countifs(criteria_range1, criteria1, [criteria_range2, criteria2]...)
Special focus is drawn to the difference between the Count and CountA commands: the Count command counts only cells with numerical values, whereas CountA also counts cells with text values.
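The Count/CountA distinction can also be sketched in Python; here `None` stands in for an empty cell, and the sample column is invented for illustration:

```python
# A "column" of cells; None plays the role of an empty cell.
cells = ["Header", 10, 20, "n/a", None, 30]

# COUNT: only cells holding numerical values
count = sum(1 for c in cells if isinstance(c, (int, float)))

# COUNTA: any non-empty cell, text included
count_a = sum(1 for c in cells if c is not None)

# COUNTIF: count on a condition (here, numeric values greater than 15)
count_if = sum(1 for c in cells if isinstance(c, (int, float)) and c > 15)

print(count, count_a, count_if)  # 3 5 2
```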
Moving on to absolute versus relative cell references, which, as I realized with time, can often be the difference between a working formula and chaos if used incorrectly. Starting off with what they mean and why they’re significant: a relative cell reference is one that shifts depending on where you move (or copy) the formula inside the table. At a fundamental level, we expect values to change as we move across a table; relative referencing therefore gives us a formula that yields a dynamic value, changing for each row or column. Absolute cell referencing, on the other hand, is a whole other ballgame; here the fundamental is that the referenced cell stays constant, no matter where the formula goes.
There are ways we can set this up, and we can keep either the row or the column constant (we can also keep both constant, but that just yields the same value everywhere, which negates the need for a formula). Essentially, we lock a column or row by putting a ‘$’ sign before the required field in the formula.
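As a toy illustration of the ‘$’ rule, here is a small Python sketch of what Excel does when a formula is copied down one row: plain references like B2 shift, while ‘$’-anchored ones like $C$1 stay frozen. The function name and formulas are invented for this sketch:

```python
import re

# Toy model of copying a formula down n rows: relative row numbers shift,
# rows anchored with '$' stay put.
def copy_down(formula: str, n: int = 1) -> str:
    def shift(m: re.Match) -> str:
        col_abs, col, row_abs, row = m.groups()
        new_row = int(row) if row_abs else int(row) + n
        return f"{col_abs}{col}{row_abs}{new_row}"
    return re.sub(r"(\$?)([A-Z]+)(\$?)(\d+)", shift, formula)

print(copy_down("=B2*$C$1"))  # =B3*$C$1  (B2 shifts, $C$1 is frozen)
print(copy_down("=B2*C1"))    # =B3*C2    (both references shift)
```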
Coming on to Pivot Tables, which Fred Pike, my instructor, calls one of the defining aspects separating a person proficient in Excel from a newbie. There are a couple of prerequisite conditions: one, the data has to be contiguous, and two, there have to be headers.
The rows in the finally created Pivot Table will be the headers in the original table. Additionally, you can assign fields to rows, columns, or values. Filtering from there is pretty straightforward; you get a drop-down menu for each header and move ahead from there. I am attaching an image below, illustrating both the constituents of a pivot table and the difference between Dimensions and Metrics.
[Image: the constituents of a Pivot Table, and Dimensions versus Metrics]
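At its core, a Pivot Table is a group-and-summarise operation, which can be sketched in a few lines of Python. The field names and numbers below are invented for illustration:

```python
from collections import defaultdict

# Contiguous data with headers; each dict key plays the role of a header.
rows = [
    {"channel": "Organic", "month": "Jan", "sessions": 120},
    {"channel": "Paid",    "month": "Jan", "sessions": 80},
    {"channel": "Organic", "month": "Feb", "sessions": 150},
]

# Rows = the 'channel' field, Values = sum of 'sessions'
pivot = defaultdict(int)
for r in rows:
    pivot[r["channel"]] += r["sessions"]

print(dict(pivot))  # {'Organic': 270, 'Paid': 80}
```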
Moving on to segmenting pivot tables, there are three ways to do this: slicer, filter, and timeline. Timeline is the most self-explanatory of the three; it works on the date column only and gives you a slider that breaks down the data on that basis. Slicer is the most versatile one; I see it as a ‘one-step upgrade’ from filters, since at the click of a button we can segment our data on the basis of the field we have chosen.
We then visit the ‘Power tips for Pivot Tables’, in which I found a lot of new insights. First off, setting up our data and toggling on the ‘Tables’ option seems rudimentary and unnecessary, but as it turns out, a lot of back-end options toggle on as well, which help out in the long run. For instance, if we add a new column of values, it is reflected in our Pivot Tables much more quickly, as compared to manually adding it in our settings later.
Additionally, we also have the option of creating a page (or sheet) for each filter item. This can be toggled, after we have created a Pivot Table, by going to Analyse → Options → Show Report Filter Pages. We essentially get a new pivot table for each of the options under the heading we have chosen. Obviously, this makes a lot more sense for a filter with a smaller number of options; we certainly don’t need clutter in the form of dozens of sheets.
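The ‘one sheet per filter value’ idea is essentially a partition of the data by one field, which can be sketched like so (data and field names invented for illustration):

```python
from collections import defaultdict

# Raw rows; each distinct 'region' value would become its own "sheet".
rows = [
    {"region": "East", "sales": 100},
    {"region": "West", "sales": 250},
    {"region": "East", "sales": 75},
]

# One "page" (list of rows) per distinct filter value
sheets = defaultdict(list)
for r in rows:
    sheets[r["region"]].append(r)

print(sorted(sheets))       # ['East', 'West']
print(len(sheets["East"]))  # 2
```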
Furthermore, we expanded on the ‘Show Values As’ option and moved on to turning off the often-uncomfortable ‘Autofit Columns’ option, which gives us a static width for the column we have selected.
Since we have already made Pivot Tables, we can include Pivot Charts in the mix as well, to assist us with pictorial representation. Keep in mind, this works only when the selected fields are relatively few in number; otherwise, we end up with a lot of clutter and a graph that makes little sense.
Alright, this has been my progress with EAS so far! I hope you enjoyed reading this essay and learnt something new that you can implement in your next project. In the upcoming week, I still need to complete EAS and take the all-feared test I have been procrastinating on. Then, I will move back to GA4, unless I find an alternate domain I can boost my scores with.
That’s it for this week! I will continue these strings every week as usual. As always, I’ll link my website down below; do check it out! Till then, live long and prosper.
https://datalytics101.weebly.com/
Additionally, check out the CXL site as well!