PL-300 Practice Test 9: Answer Review
Question 1 of 52
You have multiple dashboards. You need to ensure that when users browse the available dashboards on powerbi.com, they can see which dashboards contain Personally Identifiable Information (PII). The solution must minimize configuration effort and impact on the dashboard design. What should you do?
Explanation:
Data classification tags show up next to the dashboard name, letting anyone viewing the dashboard know the level of security that should be applied to it and the data it contains. You can apply tags such as HBI (High Business Impact), MBI (Medium Business Impact), and LBI (Low Business Impact), and the tag will show next to each dashboard. https://docs.microsoft.com/en-us/power-bi/create-reports/service-data-classification
Question 2 of 52
You are developing a sales report that will have multiple pages. Each page will answer a different business question.
You plan to have a menu page that will show all the business questions. You need to ensure that users can click each business question and be directed to the page where the question is answered.
The solution must ensure that the menu page will work when deployed to any workspace.
What should you include on the menu page?
Question 3 of 52
You create a dashboard by using the Microsoft Power BI service. The dashboard contains a card visual that shows total sales for the current year. You grant users access to the dashboard by using the Viewer role in the workspace. A user wants to receive daily notifications of the number shown on the card visual.
What should the user do?
Explanation:
The user should create a subscription to receive daily notifications of the number shown on the card visual. Here's why:
Subscriptions in Power BI allow users to receive scheduled emails with snapshots of reports, dashboards, or specific visuals.
Data alerts are designed to notify users when specific data thresholds are met. While useful for monitoring data changes, they are not suitable for receiving regular updates on a specific value.
Sharing the dashboard provides access to the dashboard itself, but it doesn't automatically deliver notifications.
By creating a subscription, the user can receive the desired daily updates of the total sales number directly in their email inbox.
Question 4 of 52
You build a report to help the sales team understand its performance and the drivers of sales. The team needs a single visualization to identify which factors affect success. Which type of visualization should you use?
Question 5 of 52
You have four sales regions. Each region has multiple sales managers.
You implement row-level security (RLS) in a data model. You assign the relevant distribution lists to each role.
You have sales reports that enable analysis by region. The sales managers can view the sales records of their region and are prevented from viewing records from other regions. A sales manager changes to a different region. You need to ensure that the sales manager can see the correct sales data.
What should you do?
Explanation:
AD security groups make the most sense for a business spread across four regions. Having to change the security permissions in Power BI every time there is staff turnover is a lot of administration; adding or removing a member of a security group takes a couple of seconds and is hassle-free.
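As an aside, the same requirement can also be met with dynamic row-level security, where a single role filters rows by the signed-in user instead of maintaining per-region role membership. A minimal DAX sketch of such a role filter, assuming a hypothetical Region table with a ManagerEmail column (the names are illustrative, not from the question):

// RLS role filter defined on the Region table.
// Each sales manager sees only rows whose ManagerEmail matches
// the account they signed in with.
[ManagerEmail] = USERPRINCIPALNAME()

With this pattern, moving a manager to a new region only requires updating the ManagerEmail value in the data rather than editing roles or group membership.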
Question 6 of 52
You are configuring a Microsoft Power BI data model to enable users to ask natural language questions by using Q&A.
You have a table named Customer that has the following measure.
Customer Count = DISTINCTCOUNT(Customer[CustomerID])
Users frequently refer to customers as subscribers. You need to ensure that the users can get a useful result for “subscriber count” by using Q&A.
The solution must minimize the size of the model.
What should you do?
Question 7 of 52
You use an R visual to produce a map of 500,000 customers. You include the values of [CustomerID], Latitude, and Longitude in the fields sent to the visual. Each customer ID is unique.
In powerbi.com, when users load the visual, they only see some of the customers.
What is the cause of the issue?
Explanation:
Data size limitations: data used by the R visual for plotting is limited to 150,000 rows. If more than 150,000 rows are selected, only the top 150,000 rows are used and a message is displayed on the image. Additionally, the input data has a limit of 250 MB. https://docs.microsoft.com/en-us/power-bi/visuals/service-r-visuals
Question 8 of 52
You have a Microsoft Power BI dashboard. You need to ensure that consumers of the dashboard can give you feedback that will be visible to the other consumers of the dashboard.
What should you use?
Question 9 of 52
You have a Microsoft SharePoint Online site that contains several document libraries. One of the document libraries contains manufacturing reports saved as Microsoft Excel files. All the manufacturing reports have the same data structure. You need to load only the manufacturing reports to a table for analysis. What should you do in Microsoft Power BI Desktop?
Explanation:
Once you get data from the SharePoint folder, filter the Folder Path column to the folder where your data is stored so that you load only the files from that folder.
Question 10 of 52
You have several reports and dashboards in a workspace. You need to grant all organizational users read access to a dashboard and several reports.
Solution: You publish an app to the entire organization.
Does this meet the goal?
Question 11 of 52
You have several reports and dashboards in a workspace. You need to grant all organizational users read access to a dashboard and several reports.
Solution: You assign all the users the Viewer role in the workspace.
Does this meet the goal?
Explanation:
Yes. Assigning the Viewer role to all organizational users grants them read-only access to all content within the workspace, including dashboards and reports. This ensures that they can view the desired reports without being able to modify or delete them.
Question 12 of 52
You publish a Microsoft Power BI dataset to powerbi.com. The dataset appends data from an on-premises Oracle database and an Azure SQL database by using one query. You have admin access to the workspace and permission to use an existing on-premises data gateway for which the Oracle data source is already configured. You need to ensure that the data is updated every morning. The solution must minimize configuration effort.
Which two actions should you perform when you configure scheduled refresh? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Question 13 of 52
You have a report that contains three pages. One of the pages contains a KPI visualization. You need to filter all the visualizations in the report except for the KPI visualization. Which two actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Explanation:
To filter all visualizations in a report except for the KPI visualization, you should perform these two actions:
Configure a report-level filter: this will apply the filter to all visualizations within the report.
Edit the interactions of the KPI visualization: specifically, set the interaction of the KPI visualization to "None". This ensures that the report-level filter does not affect the KPI visualization, allowing it to display the unfiltered data.
Report-level filters are applied globally to all visualizations within the report, providing a consistent filtering experience across all pages. Visual interactions allow you to control how visualizations are affected by selections made in other visualizations. By setting the KPI visualization's interaction to "None", you effectively exempt it from the influence of the report-level filter. Combining these two actions achieves the desired result of filtering all visualizations in the report except for the KPI visualization.
Question 14 of 52
The Impressions table contains approximately 30 million records per month.
You need to create an ad analytics system to meet the following requirements:
* Present ad impression counts by day, campaign, and [Site_name]. Analytics for the last year are required.
* Minimize the data model size.
Which two actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Question 15 of 52
You have a dataset named Pens that contains the following columns:
* Unit Price
* Quantity Ordered
You need to create a visualization that shows the relationship between Unit Price and Quantity Ordered. The solution must highlight orders that have a similar unit price and ordered quantity.
Which type of visualization and which feature should you use? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Question 16 of 52
DRAG AND DROP
You are modeling data in a table named SalesDetail by using Microsoft Power BI. You need to provide end users with access to summary statistics about the SalesDetail data. The users require insights on the completeness of the data and the value distributions. Which three actions should you perform in sequence?
To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
The correct answer is: C. SQL, DAX, M.
Here's why:
SQL (Structured Query Language): Power BI can connect to various data sources using SQL queries to retrieve data.
DAX (Data Analysis Expressions): DAX is a formula language specifically designed for Power BI to perform calculations, aggregations, and data manipulation within the Power BI Desktop environment; it is used in Data view to create measures and calculated columns.
M (Power Query language): M is the language used in the Power Query Editor (part of Power BI Desktop) to transform and shape data before it is loaded into the data model. It allows for data cleansing, filtering, combining datasets, and more.
Explanation of incorrect options:
B. SQL, Python, R: While Power BI can run Python or R scripts, these languages aren't directly used for data extraction within Power BI.
MDX: MDX (Multidimensional Expressions) is a query language used in OLAP (Online Analytical Processing) databases; it is not one of the expression languages you author in Power BI.
Python: Python is a programming language that can be used in Power BI through Python scripts and visuals, but it is not a native expression language for data extraction.
R: R is a programming language and environment for statistical computing and graphics that can be used in Power BI through R scripts and visuals, but it is not a native expression language for data extraction.
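For context, here is what a typical DAX measure of the kind described above looks like (the table and column names are illustrative assumptions):

// A simple DAX measure: sums SalesAmount over the current filter context.
Total Sales = SUM ( Sales[SalesAmount] )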
Question 21 of 52
You are creating a Microsoft Power BI imported data model to perform basket analysis. The goal of the analysis is to identify which products are usually bought together in the same transaction, across and within sales territories.
You import a fact table named Sales as shown in the exhibit.
The related dimension tables are imported into the model.
Can the SalesRowID and AuditID columns be removed from the model without impeding the analysis goals?
Explanation:
Both the SalesRowID and AuditID columns can be removed from the model without impeding the analysis goals, as these fields hold sales-detail row identifiers and data processing (audit log) information that are not necessary for basket analysis.
Question 22 of 52
You are creating a Microsoft Power BI imported data model to perform basket analysis. The goal of the analysis is to identify which products are usually bought together in the same transaction, across and within sales territories.
You import a fact table named Sales as shown in the exhibit.
The related dimension tables are imported into the model.
Must the TaxAmt column retain the current number of decimal places to perform the basket analysis?
Explanation:
It is not necessary to retain the decimal values of the TaxAmt column (decimal data type) for basket analysis. Basket analysis examines which products are frequently purchased or ordered together; its goal is to analyze the relationship between events.
Note: Two products are related when they are present in the same basket. In other words, the event granularity is the purchase of a product.
Question 23 of 52
Litware, Inc. Case Study
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.
-Overview
Litware, Inc. is an online retailer that uses Microsoft Power BI dashboards and reports.
The company plans to leverage data from Microsoft SQL Server databases, Microsoft Excel files, text files, and several other data sources.
Litware uses Azure Active Directory (Azure AD) to authenticate users.
– Existing Environment Sales Data
Litware has online sales data that has the SQL schema shown in the following table.
In the Date table, the date column has a format of yyyymmdd and the month column has a format of yyyymm. The week column in the Date table and the week_id column in the Weekly_Returns table have a format of yyyyww. The region_id column can be managed by only one sales manager.
Data Concerns
You are concerned with the quality and completeness of the sales data. You plan to verify the sales data for negative sales amounts.
Reporting Requirements
Litware identifies the following technical requirements:
* Executives require a visual that shows sales by region.
* Regional managers require a visual to analyze weekly sales and returns.
* Sales managers must be able to see the sales data of their respective regions only.
* The sales managers require a visual to analyze sales performance versus sales targets.
* The sales department requires reports that contain the number of sales transactions.
* Users must be able to see the month in reports as shown in the following example: Oct 2020.
* The customer service department requires a visual that can be filtered by both sales month and ship month independently.
Question:
You need to create the required relationship for the executives' visual. What should you do before you can create the relationship?
Question 24 of 52
(This question uses the Litware, Inc. case study described above.)
Question:
You need to create a calculated column to display the month based on the reporting requirements. Which DAX expression should you use?
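For illustration only, a hedged DAX sketch that produces a label such as Oct 2020 from the case study's yyyymm month column; this is one possible approach, not necessarily the exam's listed answer:

Month Label =
VAR ym = 'Date'[month]  // an integer such as 202010
RETURN
    FORMAT ( DATE ( INT ( ym / 100 ), MOD ( ym, 100 ), 1 ), "MMM yyyy" )  // "Oct 2020"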
Question 25 of 52
(This question uses the Litware, Inc. case study described above.)
Question:
You need to create a visualization to meet the reporting requirements of the sales managers. How should you create the visualization? To answer, select the appropriate options in the answer area.
Question 26 of 52
Contoso, Ltd. is a manufacturing company that produces outdoor equipment. Contoso has quarterly board meetings for which financial analysts manually prepare Microsoft Excel reports, including profit and loss statements for each of the company's four business units, a company balance sheet, and net income projections for the next quarter.
Data and Sources
Data for the reports comes from three sources. Detailed revenue, cost, and expense data comes from an Azure SQL database. Summary balance sheet data comes from Microsoft Dynamics 365 Business Central. The balance sheet data is not related to the profit and loss results, other than that both relate to dates.
Monthly revenue and expense projections for the next quarter come from a Microsoft SharePoint Online list. Quarterly projections relate to the profit and loss results by using the following shared dimensions: date, business unit, department, and product category.
Net Income Projection Data
Net income projection data is stored in a SharePoint Online list named Projections in the format shown in the following table.
Revenue projections are set at the monthly level and summed to show projections for the quarter.
Balance Sheet Data
The balance sheet data is imported with final balances for each account per month in the format shown in the following table.
There is always a row for each account for each month in the balance sheet data.
Dynamics 365 Business Central Data
Business Central contains a product catalog that shows how products roll up to product categories, which roll up to business units.
Revenue data is provided at the date and product level. Expense data is provided at the date and department level.
Business Issues
Historically, it has taken two analysts a week to prepare the reports for the quarterly board meetings. Also, there is usually at least one issue each quarter where a value in a report is wrong because of a bad cell reference in an Excel formula. On occasion, there are conflicting results in the reports because the products and departments that roll up to each business unit are not defined consistently.
Requirements:
1. Planned Changes - Contoso plans to automate and standardize the quarterly reporting process by using Microsoft Power BI. The company wants to reduce the time it takes to populate reports to less than two days. The company wants to create common logic for business units, products, and departments to be used across all reports, including, but not limited to, the quarterly reporting for the board.
2. Technical Requirements - Contoso wants the reports and datasets refreshed with minimal manual effort. The company wants to provide a single package of reports to the board that contains custom navigation and links to supplementary information. Maintenance, including manually updating data and access, must be minimized as much as possible.
3. Security Requirements – The reports must be made available to the board from powerbi.com. A mail-enabled security group will be used to share information with the board. The analysts responsible for each business unit must see all the data the board sees, except the profit and loss data, which must be restricted to only their business unit’s data. The analysts must be able to build new reports from the dataset that contains the profit and loss data, but any reports that the analysts build must not be included in the quarterly reports for the board. The analysts must not be able to share the quarterly reports with anyone.
4. Report Requirements - You plan to relate the balance sheet to a standard date table in Power BI in a many-to-one relationship based on the last day of the month. At least one of the balance sheet reports in the quarterly reporting package must show the ending balances for the quarter, as well as for the previous quarter. Projections must contain a column named RevenueProjection that contains the revenue projection amounts.
The relationships between products and departments to business units must be consistent across all reports.
The board must be able to get the following information from the quarterly reports:
– Revenue trends over time
– Ending balances for each account
- A comparison of expenses versus projections by quarter
– Changes in long-term liabilities from the previous quarter
– A comparison of quarterly revenue versus the same quarter during the prior year
Question:
Users want a column like Apr-20 for reporting purposes. How will you create this column?
Explanation:
Use a DAX formula for a calculated column: Column = FORMAT('Date'[Date], "MMM-YY"). The "MMM-YY" format string produces values such as Apr-20.
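One note on this approach: a text label such as Apr-20 sorts alphabetically by default. A minimal sketch of an optional companion column for chronological sorting (an illustrative addition, not part of the stated answer):

// Numeric sort key, e.g., 202004 for Apr-20. In the model view, set the
// label column's "Sort by column" property to this column.
MonthSort = YEAR ( 'Date'[Date] ) * 100 + MONTH ( 'Date'[Date] )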
Question 27 of 52
(This question uses the Contoso, Ltd. case study described above.)
Question:
How will you create a custom date calendar?
Explanation:
We need a DAX expression that creates a calendar table to meet the reporting requirements.
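A minimal sketch of such a calendar table, assuming an illustrative fixed date range and column set (CALENDARAUTO() could replace the fixed range):

DateCalendar =
ADDCOLUMNS (
    CALENDAR ( DATE ( 2019, 1, 1 ), DATE ( 2021, 12, 31 ) ),  // one row per date
    "Year", YEAR ( [Date] ),
    "MonthNumber", MONTH ( [Date] ),
    "MonthLabel", FORMAT ( [Date], "MMM-YY" ),  // e.g., Apr-20
    "Quarter", "Q" & QUARTER ( [Date] )
)

Mark the result as a date table and relate it to the balance sheet data on the last day of each month, per the report requirements.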
Question 28 of 52
You use Power BI Desktop to prepare data. You examine data quality in Power Query.
You need to discover the number of errors in a column.
Which two Data preview options should you use? Each correct answer presents a complete solution.
Explanation:
Column quality and Column profile are the two Data preview options that show the number of errors in a column. Column quality shows the percentage of valid, error, and empty values in a column; if you hover over it, the number of errors is displayed. Column profile shows statistics and the distribution of data for a column, and the statistics include the number of errors.
Column distribution is an option in Power Query that shows the distribution of data for all columns; it does not show the number of errors.
Show whitespace is a Data preview option in Power Query that displays whitespace characters, including line feeds, in data cells; it does not show the number of errors.
Question 29 of 52
What is the use of the "Select Related Tables" option while importing data from SQL Server?
Explanation:
It will import tables that have a physical relationship with the selected table.
When you use the Select Related Tables option in the Navigator window, Power BI identifies and includes tables that are connected to the selected table through foreign key relationships. This ensures that you import not only the main table but also its related tables, preserving data integrity and consistency. Selecting this option streamlines the import process and avoids manually selecting each related table, saving time and effort.
Question 30 of 52
You are creating a dataset and modeling the data for a Power BI report. The report should contain up-to-date information. The tables in the data model are shown in the exhibit and are described below:
• Sales has the details of sales made to customers and has a large number of records. Many new records are added daily.
• Customer is the list of customers. Customers have a few thousand records and there are additions and updates regularly.
• Territory is the geographical area for the sales teams. Territories are rarely updated.
• Date is a date table created with DAX.
You need to select the correct storage mode to meet your company’s requirements.
Which storage mode should you use for the following requirement?
“Use Q&A in reports”
Explanation:
You should use the Import storage mode to be able to use Q&A in your report. Q&A features are not available if the table has a storage mode of DirectQuery. The Import storage mode allows all Power BI service features, such as Q&A, to be used.
Question 31 of 52
“Live Connection is used when connecting to multidimensional data sources, such as Analysis Services”
Select whether the above statement is True or False.
Explanation:
The statement "Live Connection is used when connecting to multidimensional data sources, such as Analysis Services" is TRUE.
Here's why:
Live Connection allows a data visualization tool like Power BI to directly access and query data from a source model without copying the data into the BI tool itself. This reduces storage requirements and ensures users are always working with the latest data.
Multidimensional data sources include Microsoft SQL Server Analysis Services (SSAS) multidimensional models. These models offer a data organization format optimized for analytical workloads, often used in business intelligence (BI) and data warehousing scenarios.
Therefore, Live Connection is a suitable choice for connecting BI tools like Power BI to multidimensional data sources like SSAS, enabling real-time data access and analysis without data duplication.
Question 32 of 52
You plan to use dataflows to perform extract, transform, and load processing on data stored in the Common Data Service to prepare it for Power BI reports.
You need to set up the dataflows to process and store data. Which components should you use for the following process?
“Store data in dataflows using….”
Explanation:
You can store dataflow data in Common Data Model (CDM) folders. A CDM folder contains one or more CSV files for each entity, plus a JSON metadata file. Dataflows can also populate Common Data Service entities.
You cannot store dataflow data in JSON files. Dataflows create CSV files; a JSON file is used to describe the metadata for the entities in a dataflow.
You cannot store dataflow data in OneDrive for Business. The OneDrive for Business that is associated with a Power BI workspace is not used by dataflows.
Question 33 of 52
You import an HR dataset into Power BI Desktop.
You need to see how many empty and error rows are in a dataset, as shown below.
Which two data quality options can you use to meet your goal? Each correct answer presents a complete solution.
Explanation:
You can use the Column quality and Column profile options to check empty or error values. Column quality allows you to analyze valid, error, or empty values for all columns in a single view. Column profile allows you to analyze value distribution along with empty or error values for the selected column.
You should not use the Column distribution option to check empty or error values. Column distribution shows distinct and unique values for all columns in a single view.
You should not use the Custom column option to check empty or error values. The Custom column option allows you to create a new column in the Power Query Editor, either by example or by providing a column formula.
Question 34 of 52
You create a Power BI report using an Excel workbook stored on OneDrive for Business as the data source in Power BI Desktop.
You need to replace the Excel workbook with a new version that contains updated data.
For the following statement(s), select Yes if the statement is true. Otherwise, select No.
“The Excel workbook must have the same structure as the original workbook.”
Explanation:
Yes. The Excel workbook must have the same structure as the original file. If the columns are different, whether there are more columns, fewer columns, or differently named columns, the report will break.
Question 35 of 52
You have Employee and Salary tables in a data model. You need to include employee salaries in the Employee table, as shown in the images below.
Before transformation:
After transformation:
Which transformation should you use? Choose the correct answer.
Explanation:
You should use the Merge queries transformation to include employee salaries in the Employee table. This transformation allows you to merge two or more tables into an existing table based on a column that is common to all participating tables. This is like a join in SQL.
You should not use the Append queries transformation. This transformation combines two or more tables into an existing table if all participating tables have the same schema. This is like a union in SQL.
You should not use the Append queries as new transformation. This transformation combines two or more tables into a new table if all participating tables have the same schema. This is like a union in SQL.
You should not use the Merge queries as new transformation. This transformation merges two or more tables into a new table based on a common column. This is like a join in SQL.
Question 36 of 52
36. Question
Your company moved from a self-service approach to an increasingly centralized approach. The IT team created a SQL Server Analysis Services (SSAS) model for you to use.
When using live connections for SSAS, why might the Relationships view disappear in Power BI?
Correct
The most likely reason the Relationships view might disappear in Power BI when using live connections for SSAS is:
C. The model’s relationships are managed in SSAS, not in Power BI Desktop.
Here’s why:
Live connections: When establishing a live connection to an SSAS model, Power BI Desktop acts as a front-end tool for querying and visualizing data. The underlying data and relationships are still managed within the SSAS model itself.
No editing in Power BI Desktop: Since the relationships are defined in SSAS, Power BI Desktop does not allow editing them directly. Consequently, the Relationships view might not be displayed to avoid confusion or accidental modifications that could conflict with the SSAS model.
Alternative options: To manage relationships, you’ll need to access the SSAS model management tools (e.g., SQL Server Management Studio or dedicated SSAS management tools) where you can view and edit existing relationships.
While other options might contribute to specific scenarios, they are not the primary reason for the Relationships view disappearing in this context:
A. Hidden by default: While it’s possible to hide the Relationships view, it’s usually accessible in Power BI Desktop and wouldn’t automatically disappear solely due to a live SSAS connection.
B. SQL Server data: The Relationships view is available for data from various sources, including SSAS, not just SQL Server.
D. Automatic detection: While live connections might attempt to infer relationships based on metadata, they primarily rely on pre-defined relationships in the source data model (SSAS in this case).
E. Cross-table formulas: Live connections support cross-table formulas as long as the required relationships exist in the underlying data model.
Additional Considerations:
Some third-party tools or custom solutions might offer workarounds to view or manage relationships indirectly within Power BI Desktop when using live connections to SSAS. However, these are not standard functionalities and require additional setup and potentially technical expertise.
By understanding that relationships are managed in SSAS for live connections, you can adjust your approach for viewing and modifying relationships accordingly.
Question 37 of 52
37. Question
Which feature allows you to navigate through HTML tags to get data from a website?
Correct
Web scraping is the Power BI feature that lets you navigate through HTML tags to extract data from a website.
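A minimal Power Query (M) sketch of pulling a table out of a web page (the URL is hypothetical):

    let
        // Web.Contents fetches the page; Web.Page parses the HTML into navigable tables
        Source = Web.Page(Web.Contents("https://example.com/prices")),
        // Pick the first parsed table from the page
        Data = Source{0}[Data]
    in
        Data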
Question 38 of 52
38. Question
You plan to use dataflows to perform extract, transform, and load (ETL) processing on data stored in the Common Data Service to prepare it for Power BI reports.
You need to set up the dataflows to process and store data.
Which component should you use for the following process?
"Store dataflows using…"
Correct
You should store dataflows in Azure Data Lake Storage Gen2. The storage can either be managed by the Power BI service or be an Azure Data Lake Storage account that you create in Azure.
Dataflows can extract data from Azure SQL Server and the Common Data Service. Power BI datasets can be populated from a dataflow.
Question 39 of 52
39. Question
You are a data analyst using Power BI with Power Query to connect to and get the data.
You use the Common Data Service as a data source.
You need to investigate query performance issues, but the View Native Query option is disabled.
What is a possible cause for the View Native Query option being disabled?
Correct
The View Native Query option in Power Query is disabled if query folding is not possible for the data source and transform steps you have specified. Query folding improves performance by using native queries against the data source. The Common Data Service does not support query folding, so the View Native Query option will always be disabled when using the Common Data Service connector.
Some transformations will prevent query folding, but adding a conditional column as a step in Query Settings in Power Query will not.
Defining measures in Power BI does not affect query folding. Measures are typically aggregations and are calculated at the time of the request.
The Import storage mode can achieve query folding for relational data sources, so using the Import storage mode will not disable the option.
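For contrast, a minimal Power Query (M) sketch against a relational source that does fold (server, database, and table names are hypothetical); while folding is preserved, View Native Query stays enabled on the last step:

    let
        Source = Sql.Database("myserver", "SalesDb"),
        Orders = Source{[Schema = "dbo", Item = "Orders"]}[Data],
        // Row filters and column selection typically fold into the native SQL query
        Filtered = Table.SelectRows(Orders, each [OrderDate] >= #date(2023, 1, 1)),
        Selected = Table.SelectColumns(Filtered, {"OrderID", "OrderDate", "Amount"})
    in
        Selected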
Question 40 of 52
40. Question
You are a business intelligence analyst for a worldwide organization.
You create a Power BI report that shows sales for your organization.
You need to provide a version of the report that filters the data for a single country.
Your Solution:
1) Add a report-level filter.
2) Share the report.
3) Check the Share report with the current filters and slicers option.
Does this solution meet the goal?
Correct
This solution meets the goal. The Share report with current filters and slicers option will share the filtered version of the report. This sharing option creates a bookmark to filter the report data. The URL emailed when you share the report includes the bookmark.
The user will have access to the underlying dataset and so may be able to view data for other countries. If you need to restrict access to data from other countries, you should implement Row Level Security (RLS).
Question 41 of 52
41. Question
You are creating a dataset and modeling the data for a Power BI report. The report should contain up-to-date information. The tables in the data model are shown in the exhibit and are described below:
• Sales has the details of sales made to customers and has a large number of records. Many new records are added daily.
• Customer is the list of customers. Customers have a few thousand records and there are additions and updates regularly.
• Territory is the geographical area for the sales teams. Territories are rarely updated.
• Date is a date table created with DAX.
You need to select the correct storage mode to meet your company’s requirements.
Which storage mode should you use for the following requirement?
“Use Customer and Territory as slicers”
Correct
You should use the Dual storage mode for slicers on the Customer and Territory tables. Slicers filter the data on the report page. If there are simple relationships in the model, Dual mode can improve performance by either caching with the Import storage mode or executing a join relationship with the DirectQuery storage mode. The Dual storage mode allows Power BI to use either Import or DirectQuery depending on the visualizations on the report.
Question 42 of 52
42. Question
You need to create a report that shows the company's sales performance. The sales team asked you to add a single visualization that can help identify the factors that affect sales and influence success. Which of the following visualizations would you use?
Correct
The key influencers visual helps you understand the factors that drive a metric you're interested in. It analyzes your data, ranks the factors that matter, and displays them as key influencers. For example, suppose you want to figure out what influences employee turnover, which is also known as churn. One factor might be employment contract length, and another factor might be commute time.
The key influencers visual is a great choice if you want to:
– See which factors affect the metric being analyzed.
– Contrast the relative importance of these factors. For example, do short-term contracts affect churn more than long-term contracts?
Features of the key influencers visual
1. Tabs: Select a tab to switch between views. Key influencers shows you the top contributors to the selected metric value. Top segments shows you the top segments that contribute to the selected metric value. A segment is made up of a combination of values. For example, one segment might be consumers who have been customers for at least 20 years and live in the west region.
2. Drop-down box: The value of the metric under investigation. In this example, look at the metric Rating. The selected value is Low.
3. Restatement: It helps you interpret the visual in the left pane.
4. Left pane: The left pane contains one visual. In this case, the left pane shows a list of the top key influencers.
5. Restatement: It helps you interpret the visual in the right pane.
6. Right pane: The right pane contains one visual. In this case, the column chart displays all the values for the key influencer Theme that was selected in the left pane. The specific value of usability from the left pane is shown in green. All the other values for Theme are shown in black.
7. Average line: The average is calculated for all possible values for Theme except usability (which is the selected influencer). So the calculation applies to all the values in black. It tells you what percentage of the other Themes had a low rating. In this case, 11.35% had a low rating (shown by the dotted line).
8. Check box: Filters out the visual in the right pane to only show values that are influencers for that field. In this example, this would filter the visual to usability, security, and navigation. https://docs.microsoft.com/en-us/power-bi/visuals/power-bi-visualization-influencers
Question 43 of 52
43. Question
You have two tables as shown in the exhibit. There are no unique values in either table. The tables are related by the product name.
You need to create relationships in the data model to enable visuals to be created that contain data from both tables.
How should you model these tables?
Correct
You should set the relationship cardinality to many-to-many between the two tables. Power BI supports many-to-many relationships. Many-to-many cardinality does not require unique values in either of the tables in the relationship.
You should not create a table that contains the unique IDs and create two one-to-many relationships. In previous versions of Power BI, many-to-many relationships were not supported and you needed to create a table to intersect the other two tables. You should not create a third table because this adds to the size of the dataset and the processing time when refreshing it.
You should not set the relationship cardinality to one-to-many between the two tables. One-to-many cardinality requires unique values on one side of the relationship, and there are no unique rows in either table. One-to-many is the most common type of cardinality, and Power BI will default to it where possible. A one-to-many relationship provides better performance.
You should not set the relationship cardinality to one-to-one between the two tables. One-to-one cardinality requires unique values on both sides of the relationship.
Question 44 of 52
44. Question
You have a very large dataset, and you want to reduce its size. A table called Telemetry contains multiple records per day with the datetime, deviceid, devicetype, readingtype, and readingvalue.
You need to produce reports showing average values for devicetype and readingtype. The Telemetry table is set to DirectQuery storage mode. You need to create aggregations for date, devicetype, readingtype, and readingvalue.
Which of the following five actions should you perform in sequence?
Correct
You should perform the following actions in order:
1. Create a new table named Telemetry_Aggregate containing the columns to hold aggregate data.
2. Select the Telemetry table and select Manage aggregations.
3. Select the Telemetry_Aggregate table as the aggregation table.
4. Select the Summarization type.
5. Select the table and column from the Telemetry table.
First, you should create a new table named Telemetry_Aggregate containing the date, devicetype, readingtype, and readingvalue columns to hold aggregate data. You must create this table before you can specify the aggregations.
Next, you must select the table you want to aggregate, Telemetry, and select Manage aggregations. This will open the Manage aggregations dialog window.
Then, you must select the table to aggregate into, Telemetry_Aggregate. This will then display the columns in that table.
Finally, for each column in the aggregate table, you should select the Summarization Type (Group By, Count, Sum, Average) and then select the Telemetry table as the Detail Table and then the column from the Telemetry table as Detail column. When finished, click on Apply All to create the aggregations.
You should not set the storage mode for the Telemetry table to Import. The detail table’s storage mode must be set to DirectQuery.
You should not select the Telemetry table as the aggregation table. The Telemetry table is the detail table, not the aggregation table.
You should not select the Telemetry_Aggregate table and select Manage aggregations. You should manage aggregations for the detail table, Telemetry, not the aggregation table, Telemetry_Aggregate.
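The aggregation table itself is usually defined in the source or in Power Query before you run Manage aggregations. A minimal Power Query (M) sketch of building Telemetry_Aggregate from the question's columns (this assumes a query named Telemetry already exists):

    let
        Source = Telemetry,
        // Truncate the datetime to a date so the aggregate is one row per day/device/reading type
        WithDate = Table.AddColumn(Source, "date", each DateTime.Date([datetime]), type date),
        Grouped = Table.Group(
            WithDate,
            {"date", "devicetype", "readingtype"},
            {{"readingvalue", each List.Average([readingvalue]), type number}}
        )
    in
        Grouped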
Question 45 of 52
45. Question
You are modeling sales data as shown in the exhibit below.
You need to show the variance of sales for the different product types. You decide to use a Quick measure.
How should you configure the Quick measure? Select the correct column names for the Quick Measure setup.
Correct
You should use Amount from the Sales table as the Base value. The Base Value is the column you want to aggregate.
You should use the TypeName from the Product Type table as the Category. The Category is what you will group the variance by.
You should not use the Order Number column. Order Number is the unique reference for the Sales table and is not required for the variance.
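A hand-written DAX measure along the lines of what such a quick measure produces (a sketch using the question's names, not the exact generated code); it returns the population variance of total sales across product types:

    Sales Amount Variance =
    VARX.P (
        VALUES ( 'Product Type'[TypeName] ),
        CALCULATE ( SUM ( Sales[Amount] ) )
    )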
Question 46 of 52
46. Question
How can you automate the deployment and maintenance of your assets?
Correct
Proposition: Use Azure DevOps. Azure DevOps is a comprehensive set of tools for automating the deployment and maintenance of assets. It provides continuous integration, continuous delivery, and continuous deployment, and lets you create pipelines that automate the build, test, and deployment of your assets, along with monitoring and management tools to keep track of assets and ensure they run smoothly.
Proposition: Use GitHub Actions. GitHub Actions offers the same continuous integration, delivery, and deployment capabilities through workflows that automate the build, test, and deployment of your assets, plus comparable monitoring and management features.
Proposition: Use Jenkins. Jenkins is a popular open-source tool with similar pipeline, monitoring, and management capabilities for automating build, test, and deployment.
Proposition: Use Travis CI. Travis CI is a cloud-based tool with similar pipeline capabilities. However, it is primarily designed for open-source projects and may not be the best choice for enterprise-level deployments.
Question 47 of 52
47. Question
You have a Power BI dashboard that monitors the quality of manufacturing processes. The dashboard contains the following elements:
• A line chart that shows the number of defective products manufactured by day
• A KPI visual that shows the current daily percentage of defective products manufactured
You need to be notified when the daily percentage of defective products manufactured exceeds 3%.
What should you create?
Correct
B. An alert.
An alert is the most suitable option because it allows you to set a specific threshold for the daily percentage of defective products and receive a notification when that threshold is exceeded, so you can take immediate action to address the issue and improve the quality of the manufacturing process. A subscription simply sends a regular email with the dashboard content and does not notify you when the daily percentage exceeds 3%. A smart narrative visual and a Q&A visual are not relevant here, as neither provides a way to set a threshold or receive notifications.
Question 48 of 52
48. Question
You want to create a data source that can be accessed by external users. Which type of data source should you use?
Correct
Proposition: Common Data Service. Common Data Service (CDS) is a cloud-based data storage and management service from Microsoft. It securely stores and manages data from sources such as Dynamics 365, Office 365, and custom applications; provides a unified data model and a set of APIs for accessing and manipulating data; and enforces security and access controls so that only authorized users can reach the data. This makes it a suitable platform for a data source that external users can access.
SQL Server is a powerful relational database management system, but it requires considerable configuration and management to make it securely accessible to external users.
SharePoint is a web-based collaboration and document management platform. It suits collaboration and document management, but may not provide the security and access controls needed here.
OneDrive is a cloud-based file storage and sharing service. It suits storing and sharing files, but may not provide the security and access controls needed for managing data.
Question 49 of 52
49. Question
What is the difference between a measure and a calculated column in Power BI?
Correct
The correct proposition is: A measure is an aggregation of data, while a calculated column is a new column created from existing data.
A measure is a calculation that performs an aggregation over a set of data, such as a sum, average, or count. Measures are dynamic: they are evaluated at query time and respond to filters and slicers. A calculated column is a new column created with a formula that references other columns in the same table. Calculated columns are computed when the data is refreshed, and their stored values do not respond to filters or slicers; they are used to add new data to a table, such as a calculated percentage or a concatenated string.
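A minimal DAX sketch of the contrast (the table and column names are hypothetical):

    -- Calculated column: evaluated row by row when the data is refreshed, stored in the table
    Line Total = Sales[Quantity] * Sales[UnitPrice]

    -- Measure: evaluated at query time in the current filter context
    Total Sales = SUMX ( Sales, Sales[Quantity] * Sales[UnitPrice] )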
Question 50 of 52
50. Question
A user creates a Power BI report named ReportA that uses a custom theme. You create a dashboard named DashboardA. You need to ensure that DashboardA uses the custom theme. The solution must minimize development effort. Which two actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Correct
A and B are the correct propositions.
A. Publish ReportA to Power BI: publishing ReportA makes the custom theme used in the report available in the Power BI service, so DashboardA can access and use it.
B. From ReportA, save the current theme: saving the theme used in ReportA lets you apply it to DashboardA without creating a new one, which minimizes development effort.
C. Publish ReportA to the Microsoft Power BI Community theme gallery: incorrect, because the gallery is a public place where users share custom themes with others; publishing there is not necessary to use the theme in DashboardA.
D. From DashboardA, create a custom theme: incorrect, because creating a new custom theme requires additional development effort when the existing theme from ReportA can be reused.
Question 51 of 52
51. Question
What is the purpose of a dataflow in Power BI?
Correct
Proposition: To transform and clean data.
The purpose of a dataflow in Power BI is to transform and clean data before it is used to create visualizations or is shared with other users. Dataflows let users extract data from various sources, transform it into a usable format, and load it into Power BI for analysis. This includes tasks such as removing duplicates, filtering data, and creating calculated columns. Once the data is cleaned and transformed, it can be used to create visualizations or shared with other users, but the primary purpose of a dataflow is to ensure the data is accurate and usable for analysis.
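A minimal Power Query (M) sketch of the kind of entity query a dataflow contains (the source URL and column names are hypothetical):

    let
        Source = Csv.Document(Web.Contents("https://example.com/raw.csv")),
        Promoted = Table.PromoteHeaders(Source, [PromoteAllScalars = true]),
        // Remove duplicate rows and filter out rows with no customer
        Deduped = Table.Distinct(Promoted),
        Filtered = Table.SelectRows(Deduped, each [Customer] <> null),
        // Add a calculated column
        WithTotal = Table.AddColumn(Filtered, "Total", each Number.From([Quantity]) * Number.From([UnitPrice]), type number)
    in
        WithTotal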
Question 52 of 52
52. Question
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You are modeling data by using Microsoft Power BI. Part of the data model is a large Microsoft SQL Server table named Order that has more than 100 million records. During the development process, you need to import a sample of the data from the Order table.
Solution: From Power Query Editor, you import the table and then add a filter step to the query.
Does this meet the goal?
Correct
B. No
While adding a filter step in Power Query Editor can reduce the amount of data loaded, it is not a reliable way to sample a table of this size (more than 100 million records), for the following reasons:
Performance: filtering a very large table in Power Query can be slow and resource-intensive.
Folding is not guaranteed: if the filter step does not fold back to SQL Server as a WHERE clause, Power Query has to scan the full table before the filter is applied.
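For reference, the proposed filter-step solution looks roughly like this in Power Query M; the server name, database name, and filter condition are illustrative assumptions, not values from the question:

let
    // Connect to SQL Server (server and database names are hypothetical)
    Source = Sql.Database("sqlserver01", "SalesDB"),
    // Navigate to the Order table
    OrderTable = Source{[Schema = "dbo", Item = "Order"]}[Data],
    // The filter step: keep only recent orders (illustrative condition)
    Filtered = Table.SelectRows(OrderTable, each [OrderDate] >= #date(2024, 1, 1))
in
    Filtered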
Here are better approaches for working with a sample of the Order table (see the sketch after this list):
Use SQL Server sampling: apply a sampling clause such as TABLESAMPLE directly in the source query so that SQL Server returns only a representative subset, which Power BI then imports.
Use Power Query row reduction: Power Query offers built-in options such as Keep Top Rows (Table.FirstN) to import only a fixed number of rows during development.
Use DirectQuery: if the data does not need to be physically stored in Power BI and you only require visualizations, DirectQuery mode lets Power BI query the SQL Server table directly, avoiding a large import.
These approaches minimize the amount of data brought into Power BI during development, improving performance and efficiency.
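A minimal sketch of the first two options in Power Query M; the server and database names, the 1 percent sampling rate, and the 100,000-row cap are illustrative assumptions:

let
    Source = Sql.Database("sqlserver01", "SalesDB"),
    // Option 1: push the sampling to SQL Server with a native query;
    // TABLESAMPLE (1 PERCENT) returns roughly one percent of the table's pages.
    SqlSample = Value.NativeQuery(Source, "SELECT * FROM dbo.[Order] TABLESAMPLE (1 PERCENT)"),
    // Option 2: Keep Top Rows in Power Query; Table.FirstN folds to a
    // SELECT TOP clause when the source supports query folding.
    OrderTable = Source{[Schema = "dbo", Item = "Order"]}[Data],
    TopRows = Table.FirstN(OrderTable, 100000)
in
    TopRows

Either step keeps the development model small; once development is done, the full table can be loaded by removing the sampling step.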