Tuesday, November 24, 2009

How to insert two queries into one BEx Analyzer workbook.

I frequently need to use data from different InfoProviders. Sometimes, instead of creating a MultiProvider, it is faster to put two or more queries into one workbook and create a separate tab to display the joined data. Here are 7 steps to create such a solution:

1. Create the queries you would like to join.
2. Open one of the queries in BEx Analyzer and save it as a workbook.
3. Create two additional tabs in the workbook and give them names (e.g., query2, results).
4. Edit the query2 tab by adding design items: choose BEx Analyzer > Design Toolbar > Insert Analysis Grid.
5. In the Properties dialog box, change the Data Provider's name and click the Create button.
6. Choose the second query and confirm your choice.
7. On the results tab, create a table that merges data from both queries. Save the workbook.

What about the selection screen? The variables related to both queries will be displayed on one selection screen. If you use the same variable in both queries, there will be only one field for the shared variable.

Use of Analysis Process Designer in BI7

Everyone who has worked with BI 7.0 knows that the Analysis Process Designer (APD) is a workbench for creating, executing, and monitoring analysis processes. An analysis process is primarily based on data that has been consolidated in the data warehouse and exists in InfoProviders. From a technical point of view, one application of APD is feeding query results into a DataStore object or into an attribute of a characteristic. In this post I review a few examples of how consultants may use APD to address particular analysis tasks.



Analysis Process Designer allows you to set up a model where you move data from a source to a target and apply transformations on the way. As a source, we can use any InfoProvider in the data model. The following types of data targets are available in the Analysis Process Designer:

● Attributes of a characteristic

● DataStore objects

● Files

● CRM attributes

● Target groups for SAP CRM

● Data mining models:

○ Training the decision tree

○ Training the clustering model

○ Training the scoring model (regression)

○ Training data mining models from third parties

○ Creating association analysis models


1. Examples of business applications
1.1. ABC classification for customers

In ABC classification we assign customers to categories based on business rules. For example, you can classify your customers into three classes A, B, and C according to the sales revenue or profit they generate. When you choose ABC classification in APD, you have to specify the characteristic for which the classification is to be performed, its attribute, the key figure, the appropriate query, and the threshold values for the individual ABC classes.
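The classification rule itself is simple threshold logic. As a minimal sketch (not the APD implementation itself), the structure, table, and threshold values below are all assumptions for illustration:

```abap
* Hypothetical sketch: classify customers into A/B/C by revenue.
* Thresholds and field names are assumed, not taken from any standard object.
TYPES: BEGIN OF ty_customer,
         customer TYPE c LENGTH 10,
         revenue  TYPE p LENGTH 8 DECIMALS 2,
         class    TYPE c LENGTH 1,
       END OF ty_customer.

DATA lt_customers TYPE STANDARD TABLE OF ty_customer.

FIELD-SYMBOLS <ls_cust> TYPE ty_customer.

LOOP AT lt_customers ASSIGNING <ls_cust>.
  IF <ls_cust>-revenue >= 1000000.      "threshold for class A (assumed)
    <ls_cust>-class = 'A'.
  ELSEIF <ls_cust>-revenue >= 100000.   "threshold for class B (assumed)
    <ls_cust>-class = 'B'.
  ELSE.
    <ls_cust>-class = 'C'.
  ENDIF.
ENDLOOP.
```

In APD you configure these thresholds declaratively instead of coding them, which is precisely its advantage.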
1.2. Scoring (traffic light) model

In a number of BI scenarios we may have a requirement for generating scoring or traffic light indicators for a certain set of KPIs. We may want to know, for example, how close the actual value is to the budgeted one. A range of traffic lights (red/yellow/green) needs to be displayed by geography, product group, profit center, etc.



Traffic light indicators need to be assigned to each report line based on complex logic. For example, if one or two countries in a region are underperforming, the region's indicator is set to yellow. If more than two countries are underperforming, the region's indicator for the analyzed period should be set to red.
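The region-level rule just described can be sketched in a few lines of ABAP. This is purely illustrative; the variable names are assumptions, and the count of underperforming countries would come from the country-level query results:

```abap
* Hypothetical sketch of the region-level traffic light rule:
* 0 underperforming countries -> green, 1-2 -> yellow, more -> red.
DATA: lv_underperf TYPE i,              "derived from country-level results
      lv_indicator TYPE c LENGTH 6.

IF lv_underperf = 0.
  lv_indicator = 'GREEN'.
ELSEIF lv_underperf <= 2.
  lv_indicator = 'YELLOW'.
ELSE.
  lv_indicator = 'RED'.
ENDIF.
```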



As values for traffic light indicators are not cumulative, they have to be calculated separately for each level of granularity. Knowing the indicators at the lowest level of granularity does not help much in deriving them for upper levels, as a business rule is defined for each level separately. Therefore, we have to build a set of queries for each level of the data model where traffic light indicators need to be displayed. APD then helps us feed the query results into the cube used for reporting on scoring results.


2. Example of data flow for scoring model

The following data flow model can be used for calculating scoring results. The InfoCube contains the measures (KPIs) used for scoring, such as sales volume and sales budget. It also has a set of traffic light KPIs that need to be populated with indicators for each granularity level.


3. Why use APD in the scoring model

It is important to note that in the scoring model, instead of the APD/query approach, one can use a transformation (formerly known as an update rule) connecting the cube to itself. In the start/end routine we can build the business logic required for calculating scoring results:


However, this approach requires complex development in ABAP. Specific scoring requirements have to be documented by a business user in advance, which usually makes the development cycle longer. Any adjustments to the scoring logic require ABAP code modifications.
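For comparison, an end routine in such a cube-to-itself transformation might look roughly like this. The field names and the indicator encoding are assumptions; RESULT_PACKAGE and <result_fields> are provided by the BI transformation framework, but their structure depends on the actual InfoCube:

```abap
* Hypothetical end routine sketch for a cube-to-itself transformation.
* Field names (sales_vol, sales_bud, kpi_light) are assumed for illustration.
LOOP AT RESULT_PACKAGE ASSIGNING <result_fields>.
  " Compare actual sales against budget and set a traffic light KPI.
  IF <result_fields>-sales_vol >= <result_fields>-sales_bud.
    <result_fields>-kpi_light = 3.                        "green (assumed encoding)
  ELSEIF <result_fields>-sales_vol >= <result_fields>-sales_bud * '0.9'.
    <result_fields>-kpi_light = 2.                        "yellow
  ELSE.
    <result_fields>-kpi_light = 1.                        "red
  ENDIF.
ENDLOOP.
```

Even this simplified version shows why changes to the scoring logic mean ABAP changes, while in the query/APD approach the same thresholds live in queries that analysts can maintain themselves.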



Alternatively, when we use the query/APD approach, analysts are able to define scoring requirements in the queries and test and modify them whenever needed. They can also run the queries and check preliminary results. Needless to say, it is usually easier to modify and test queries than transformations with ABAP code.

Monday, November 23, 2009

Why do we need to debug: BREAK-POINT

Definition:
The BREAK-POINT statement is a debugging aid. When we run a program containing it, execution is interrupted at that statement and the system automatically starts the debugger, allowing you to display the contents of any fields in the program and check how the program continues. If the program is running in the background or in an update task, the system generates a system log message instead.

Break point types:

1. Static
2. Dynamic
1. Directly set
2. Specially set
i. At statement
ii. At event
iii. At function module
iv. At system exceptions
Static Breakpoint

Written directly in the ABAP program.
Should be used in the development environment only.

IF sy-subrc EQ 0.
  BREAK-POINT.
ENDIF.


Dynamic Breakpoint

* User specific.
* Can be set/deactivated/deleted at runtime.
* Deleted automatically when the user logs off from the R/3 system.
* Can be set even when a program is locked by another programmer.
* Logic can be built in while defining it.


Different ways of putting the Break-point in the Program


1) Writing the BREAK-POINT statement in the program.
2) Writing the BREAK statement together with a user name in the program, e.g. BREAK username, so the program stops only for that user.
3) By clicking the red button on a line, one can create a breakpoint at that line.
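Option 2 can be illustrated as follows. The user name is a placeholder; the BREAK macro expands to a check against sy-uname, so other users are unaffected:

```abap
* Hypothetical example: user-specific breakpoint via the BREAK macro.
* 'JSMITH' is a placeholder user name; the program stops only when
* that user runs it, since BREAK expands to IF sy-uname = 'JSMITH'.
IF sy-subrc <> 0.
  BREAK jsmith.
ENDIF.
```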

How to Create/Delete Break Point

* You can set a breakpoint by double-clicking on the statement during debugging.
* You can set a breakpoint through the menu: place the cursor on the statement where you want the breakpoint, then choose Breakpoint > Create/Delete from the menu bar.
* After creating breakpoints, press Ctrl+S or choose Save in the menu bar to save all the breakpoints for that session.
* To view all the breakpoints in an ABAP program, choose Utilities > Breakpoints > Display.
* If you want to delete a selected breakpoint, click on the displayed selected object.
* After deleting all the breakpoints, the table will be empty and no breakpoints are left.

How to Activate/Deactivate Break point

1. Activate/Deactivate All: activates all deactivated breakpoints and vice versa.
2. Delete All: deletes all the breakpoints in the program.
3. Deactivate All: deactivates all the breakpoints in the program.
4. Activate All: activates all the deactivated breakpoints.
5. Save: after setting breakpoints, clicking the Save button saves them for the session.



One can put a breakpoint at a statement, a subroutine, a function module, or a system exception, always at runtime.

If, for example, a specific program contains a SELECT statement, one function module, and two subroutines, breakpoints can be set so that:

The program will stop at the SELECT statement.
The program will stop at each subroutine.
The program will stop at the function module.

Thursday, November 19, 2009

FAQ - Information Broadcasting and General

Where can I get up-to-date information about broadcasting using an Enterprise Portal?

See SAP Note 969040.
Which browsers, e-mail servers, and clients support MHTML?

SAP cannot provide a complete list of the software that supports MHTML. Please clarify this with your software vendor. Some of the systems that support MHTML are listed below, but it is still necessary to clarify the details with the vendor.
MHTML format (MIME Encapsulation of Aggregate HTML Documents, see ftp://ftp.ietf.org/rfc/rfc2557.txt), is supported by the following Web browsers:
• Microsoft Internet Explorer
In addition, it is supported by the following email servers and clients:
• Microsoft Outlook
• IBM Lotus Domino 6 (partial)
• IBM Lotus Domino Everyplace 3.0
What do I need to consider in my support package stack upgrade planning when using information broadcasting to the Portal?

For information broadcasting to work properly, you need to have the same support package stack on both the BI system and the SAP NetWeaver Portal (with Knowledge Management). Upgrades need to be done simultaneously on both sides.
Can I broadcast e-mails to distribution lists that are defined in a groupware system (e.g. MS Exchange, Lotus Notes)?

Yes. For more information, see SAP Note 834102 (SMP login required).
What is the best approach and the best resources to learn about Information Broadcasting?

We recommend the following resources (in this order):
1. For an overview, see the E-Learning Maps on BI in SAP NetWeaver 7.0, http://service.sap.com/RKT-NetWeaver (SMP login required). Information broadcasting is documented in great detail in the standard BI documentation.
2. If you integrate broadcasting with the SAP NetWeaver Portal, you have to implement the required settings for BI in the IMG (transaction SPRO) under "SAP Transaction SPRO/SAP Reference IMG ->SAP Customizing Implementation Guide ->SAP NetWeaver ->Business Intelligence ->Reporting-Relevant Settings ->Web-Based Settings ->Integration into Portal". The Customizing step, "Overview: Integration into Portal" contains a detailed description of the steps and settings required in BI and Portal.

What is the pricing policy on information broadcasting?

Contact your local account executive for details.
Can I broadcast graphs in addition to tabular reports?

Yes. When you broadcast a report, it is broadcast with the current display.
Can I use a single report to distribute filtered or user or group-specific information to each individual user?

Yes. There are two possibilities:
1. User-specific broadcasting based on existing users.
2. Data bursting based on user information in BI master data.
For example: a single cost center report that is broadcast to all cost center managers once a week; or a regional sales report that broadcasts only the regional results to each group of sales people, while sales managers see all groups for which they are responsible. For instructions, see the standard SAP BI documentation.
Can I broadcast at any time?

Yes. Depending on the authorization settings of your SAP NetWeaver BI system, users can set up their own ad hoc schedules.
Can I change my broadcast settings after I set them up?

Yes. From the "information broadcasting" tab page of the BEx Web Analyzer, you can select "Overview of Scheduled Settings" to manage all broadcasts you have authorizations to control.
Do individual users see customized precalculated results for their broadcast report (such as only their region, only their cost center, only their benefits information)?

Yes. Authorizations can be leveraged by the broadcasting process to narrow the results of each individual user. Additionally, you can use data bursting to tailor the result of broadcasts even if the recipients are not known BI users.
Do all users have to be defined in the SAP NetWeaver BI system in order to broadcast to them?

No. E-mail addresses can also be targeted for broadcasts. Using data bursting, you can even send personalized broadcasts to non-BI users.
Can I compress output to avoid issues with e-mail size limitations?

Yes. SAP NetWeaver BI provides a zipping service that can be applied to broadcasts.
Can I subscribe to broadcast results and be notified when new broadcasts are distributed?

Yes. This is one of the advantages of incorporating Knowledge Management services of SAP NetWeaver Portal. When broadcasts are sent to the portal, reports become KM documents to which the user can subscribe.
Can I incorporate existing corporate email groups and or users into the broadcasting wizard?

Broadcasts can be sent to registered SAP NetWeaver BI users, SAP NetWeaver BI roles, and external e-mail accounts. Corporate e-mail groups have to be imported by copying and pasting the e-mail address into the Broadcaster. They will be stored there in the user's history for reuse.
Can I broadcast from one language (such as English) to other languages?

Yes. This assumes all language-relevant elements of the report have been maintained in the target language (such as texts and hierarchies).
Can I broadcast only to myself and not to everyone?

Yes. You can either send an e-mail broadcast to yourself or to your personal portfolio (KM folder) in SAP NetWeaver Portal. In both cases, the broadcast can be a single, immediate broadcast or a regularly scheduled broadcast.
We have thousands of documents in SAP NetWeaver BI Content framework - is there a migration path to get these into KM? Is there a way to access these directly without having to open the related SAP NetWeaver BI report(s)?

A migration process will be available in the future.
In SAP BW 3.5, documents can be accessed from within the KM function of Enterprise Portal using corresponding NetWeaver BI repository managers. New documents can also be created and stored in SAP NetWeaver BI Content framework from within KM using these repository managers.
With BI in SAP NetWeaver 7.0, you can migrate your documents from BI to KM (see the documentation under "Business Intelligence ->Data Warehousing ->Data Warehouse Management/Documents ->Working with Documents in Knowledge Management").
Can I setup the broadcast so the current date will be included in the broadcast header?

Yes. Variables can be incorporated into the text of the broadcast to provide the date or time of the broadcast in the header of the broadcast, if required. This can also be combined with free text for more flexible and descriptive broadcast headers.
Does Reporting Agent alerting have a migration path to the new broadcasting-based alerting? How is information broadcasting integrated?

Reporting Agent settings are still supported in SAP NetWeaver 7.0. Your existing scenarios still run. For new alert scenarios, we recommend using the BEx Broadcaster instead of the Reporting Agent. There is no migration of Reporting Agent settings to Broadcaster settings.
Can I use information broadcasting to distribute precalculated queries, Web applications, and workbooks to a third-party file server, Web server or document management systems?

Yes. With information broadcasting, you can precalculate queries, Web applications, and workbooks and publish them into the Knowledge Management of the SAP NetWeaver Portal.
In KM, you can easily create a Repository Manager (CM repository with persistence mode FSDB) that is attached to a file system directory (for example, the directory of an Internet Information Server (IIS)). You have to create a link in the KM folder of documents to the folder of the CM Repository attached to the file system or you can define your CM Repository as an entry point in KM. For more information, see SAP Note 827994 (SMP login required).
Information broadcasting can automatically put a new report on the third-party file server (for example, using the data change event in the process chain). KM offers repository managers for many different file servers, Web servers, and document management systems (such as IIS and Documentum):
1. Create CM Repository attached to file system.
2. Use iView KM Content to create subfolder in file system (optional).
3. Set permission to Administrator (optional).
4. Create link in /documents to folder of CM Repository attached to file system or define CM Repository as entry point. (See SAP Note 827994.)
5. Schedule Broadcasting Settings that export to a linked folder of CM Repository.
Because documents created via information broadcasting have additional attributes attached to them that mark them as broadcast documents, it is not possible to store this kind of document in a "pure" file system repository, because such a repository usually only stores properties like "last changed", "creator", etc. Fortunately, KM provides a mechanism to use a file system repository for these documents nevertheless: the additional properties are stored in the database.
The "persistence mode" of the repository must be "FSDB" to allow this kind of behavior. Please note that because the file and the additional properties are stored separately, the property assignment is lost if the document is moved around in the file system with a non-KM tool such as Windows Explorer.
Are there any new hardware or sizing needs regarding information broadcasting in SAP NetWeaver 7.0 compared to the SAP BW 3.x function?

An information broadcasting query is treated exactly like a normal query. There are no additional hardware requirements if the broadcast queries were already taken into account in your hardware sizing. If, however, a significant number of additional broadcasting queries is needed, consider reviewing your sizing.
In addition, using broadcasting not only via e-mail but also with the SAP NetWeaver Portal requires the additional installation of a J2EE server with SAP NetWeaver Portal and KM.
I plan to schedule the broadcast of a fixed number of documents on a regular basis. How can I calculate the system requirements needed?

To find out your sizing needs, you can use the Quick Sizer Tool (http://service.sap.com/quicksizer - SMP login required). The load caused by pre-calculation of queries must be mapped to an adequate number of virtual users.
Example: the load caused by precalculating 100 queries in 2 hours can be simulated by 50 users of category "InfoConsumer", since by definition each "InfoConsumer" causes the load of one navigation step per hour (100 queries / 2 hours = 50 navigation steps per hour).
Is there a performance difference in accessing SAP NetWeaver BI queries using online links (URLs) in a KM folder or directly using a URL within a Web browser?

There should not be any difference in performance.
What is important to know regarding J2EE memory consumption of information broadcasting scenarios?

It is recommended to allocate at least 1 GB of heap for the J2EE Engine. The lower the heap size, the more time is needed for full garbage collection. Frequent full garbage collections should be avoided; as a rule of thumb, the J2EE Engine should not spend more than 5% of its CPU time on garbage collection.
How do the broadcast channels E-mail and SAP EP KM Folders compare performancewise?

Sending broadcast reports by e-mail is 50-70% faster than deployment into the SAP NetWeaver Portal in the current support package stack for SAP NetWeaver 7.0.
We want to take advantage of information broadcasting. How can the contents sent, such as Excel workbooks, MHTs, PDFs be encrypted automatically?

The Broadcaster itself does not provide encryption. However, e-mails are sent out using an external SMTP server. Check if your SMTP server provides encryption and see also SAP Note 149926.
How can I broadcast from an SAP NetWeaver 7.0 BI system into SAP Enterprise Portal 6.0?

There are several options available for broadcasting into SAP Enterprise Portal 6.0. We recommend using a WebDAV Repository Manager. For more information, see SAP Note 969040.
Can I broadcast from an SAP NetWeaver 7.0 BI system into SAP Enterprise Portal 5.0?

No. This is not possible, and no workaround is supported.
How can I broadcast from a SAP NetWeaver 7.0 BI system into a federated portal network?

There are several options available for broadcasting into a federated portal network. We recommend using a WebDAV Repository Manager. For more information, see SAP Note 969040.
Can I display broadcasts of the same SAP NetWeaver 7.0 BI system that are triggered using ABAP runtime and using Java runtime to the same KM folder in a federated portal?

Yes. Both types of broadcasts can be broadcast to the same KM folder in the local portal of the BI system. Using remote role assignment, the Business Explorer showcase role of the local portal can be displayed within the federated portal. The local portal acts as the producer and the federated portal acts as the consumer. The Business Explorer showcase role displays typical KM folders using BEx portfolio.

Tuesday, November 17, 2009

ABAP Tips and Tricks Database

http://wiki.sdn.sap.com/wiki/display/ABAP/ABAP+Tips+and+Tricks+Database

Date/time operations in ABAP.

With ABAP, you can do simple date calculations directly. If you need something more advanced, such as adding a month (not simply 30 days) to a date, SAP provides many function modules to do the job. Here's my list of ABAP date functions; their names usually explain what they do:

CALC_DIFF_IN_MONTHS_DAYS
COMPUTE_YEARS_BETWEEN_DATES
DATE_CHECK_PLAUSIBILITY
DATE_COMPUTE_DAY
DATE_CONV_EXT_TO_INT
DATE_CONVERT_TO_FACTORYDATE
DATE_GET_WEEK
DATE_TO_PERIOD_CONVERT
FIRST_DAY_IN_PERIOD_GET
HOLIDAY_GET – holidays list for a plant
L_MC_TIME_DIFFERENCE – Calculate time difference in minutes
LAST_DAY_IN_PERIOD_GET
MONTH_NAMES_GET
MONTH_PLUS_DETERMINE
PERIOD_AND_DATE_CONVERT_OUTPUT
RP_ASK_FOR_DATE
RP_CALC_DATE_IN_INTERVAL
RP_LAST_DAY_OF_MONTHS
SD_DATETIME_DIFFERENCE
WEEK_GET_FIRST_DAY – convert YYYYWW to date
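As an illustration, adding one month to a date with RP_CALC_DATE_IN_INTERVAL might look like this. The parameter names reflect the commonly documented interface; verify them in SE37 before use:

```abap
* Example: add one month to a date using RP_CALC_DATE_IN_INTERVAL.
DATA: lv_date     TYPE sy-datum VALUE '20091130',
      lv_new_date TYPE sy-datum.

CALL FUNCTION 'RP_CALC_DATE_IN_INTERVAL'
  EXPORTING
    date      = lv_date
    days      = 0
    months    = 1
    signum    = '+'      "add; use '-' to subtract
    years     = 0
  IMPORTING
    calc_date = lv_new_date.

* lv_new_date now holds the date one month later,
* correctly handling month lengths rather than adding 30 days.
```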

Monday, November 16, 2009

How To: Trigger Background Jobs with Background User

Summary
Most of the processing in SAP happens in the form of background jobs. Some of these jobs are very critical and need to run within a specified duration. In a production support environment, the need often arises to repair failures of these jobs. Sometimes our user ID lacks the authorization to run some of these jobs, which results in missed SLAs or dependencies. This blog explains a step-by-step solution for triggering such jobs with a background user ID, which has most of the required authorizations.
Step by Step Solution
Identifying Background Job
Using transaction SM37, with the filter set to the respective user ID and the job type scheduled/released, identify the correct job that needs to be scheduled with the background user ID.
Goto Change Options
Select the appropriate job and then, from the menu, choose Job -> Change or press Ctrl+F11. This opens the job definition screen; here press the 'Step' button.

Step List Overview
The Step button leads to the Step List Overview screen. Here, simply click on the job step and press the Change button (Ctrl+Shift+F7).

Changing User
In the next screen, simply change the user from IDADMIN to ALEREMOTE and save the job.

Authentication
This approach is safe: although we can change the user ID of any job, the user ID with which the job was created remains unchanged. This helps in tracking any misuse of this functionality and can also serve audit purposes.

Thursday, November 12, 2009

BW SYSTEM TUNING

From an end user's perspective, performance simply means that the next logical dialog screen appears on his/her GUI without any long delay. If there is a delay, the system appears to be performing badly.
Traditionally, performance tuning of an SAP application deals with buffer management, database tuning, work process tuning, database fragmentation and reorganization, reducing I/O contention, operating system tuning, table striping, and so on, depending on the nature of the system.
This document deals with performance tuning from a BW perspective rather than general R/3 parameter tuning: query performance, data load performance, aggregate tuning, etc.
This document focuses on the following key aspects in detail.
1. What are the different ways to Tune an SAP system? ( General )
2. What are the general settings we need to adapt in a good performing BW system?
3. What are the factors which influence the performance on a BW system?
4. What are the factors to consider while extracting data from source system?
5. What are the factors to consider while loading the data?
6. How to tune your queries?
7. How to tune your aggregates?
8. What are the different options in Oracle for a good performing BW system?
9. What are the different tools available to tune a BW system? (With screenshots).
10. What are the best practices we can follow in a BW system?
1. What are the different ways to tune an SAP system?

Tuning an SAP system should focus on one major aspect: timely availability of the next logical screen to all users (end users/business users/super users), with equal or unequal (depending on the business requirement) allocation of technical resources. We also need to keep in mind that we should spend only the optimal amount of money on technical resources.
There are two major paths we need to follow to tune an SAP system.

Tune it depending on the business requirement.

Tune it depending on the technical requirement.

Business requirement:

Consider how many lines of business (LOBs) we have in our company. Which LOB uses which IT infrastructure, and how efficiently or inefficiently does that LOB use it? Who are all my critical users? Is it possible to assign a part of the technical resources just for them? How is the growth of my database? Which key LOBs and key users influence that growth? What data is used most frequently? Is that data always available? Likewise, the list goes on. By understanding the business requirement, we can tune the system accordingly.

Technical requirement:

How many CPUs? How many disks? Is an additional server node required? How balanced is the load? How fast is the network? Is table striping required? What is the hit ratio? What is the I/O contention? Should we reorganize? How efficient is the operating system? How is the performance of BEx? Here, too, the list goes on.
By gauging, analyzing, and balancing the two lists of technical and business requirements, we can end up with a good performing SAP system.
2. What are the general settings we need to adapt in a good performing BW system?

Following are the main parameters we need to monitor and maintain for a BW system. To start with performance tuning in a BW system, we have to focus on the following profile parameters:

rsdb/esm/buffersize_kb
rsdb/esm/max_objects
rtbb/max_tables
rtbb/buffer_length
rdisp/max_wprun_time
gw/cpic_timeout
gw/max_conn
gw/max_overflow_size
rdisp/max_comm_entries
dbs/ora/array_buf_size
icm/host_name_full
icm/keep_alive_timeout
Depending on the size of the main memory, the program buffer should be between 200 and 400 MB. Unlike in R/3 systems, a higher number of program buffer swaps is less important in BW systems and is often unavoidable, since the information stored in the program buffer is significantly less likely to be reused. While the response times of R/3 transactions are only several hundred milliseconds, the response times of BW queries are measured in seconds; tuning the program buffer can therefore only improve performance by milliseconds.

Therefore, if the available main memory is limited, you should increase the size of the extended memory. However, the program buffer should not be set lower than 200 MB. If the available main memory is sufficient, the program buffer in BW 2.X/3.X systems should be set to at least 300 MB.

BW users require significantly more extended memory than R/3 users. The size of the extended memory is related to the available main memory but should not be lower than 512 MB.

Set the maximum work process runtime parameter to its maximum and also set the timeout sessions high. Set the parameter dbs/ora/array_buf_size to a sufficiently large size to keep the number of array inserts, for example when you upload data or during rollup, as low as possible. This improves performance during insert operations.
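As a rough illustration, the memory recommendations above could translate into instance profile entries along these lines. The parameter names are standard SAP profile parameters, but the values are assumptions that must be sized for the actual host:

```
abap/buffersize        = 300000      # program buffer ~300 MB (value in KB)
em/initial_size_MB     = 1024        # extended memory; should not be below 512 MB
rdisp/max_wprun_time   = 3600        # maximum dialog work process runtime (seconds)
dbs/ora/array_buf_size = 1000000     # larger array buffer to reduce array inserts
```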
The main performance-related tables in the BW environment are:

* F-Fact tables: /BI0/F
* E-Fact tables: /BI0/E
* Dimension tables: /BI0/D
* SID tables: /BI0/S
* SID tables (navigation attribute, time-independent): /BI0/X
* SID tables (navigation attribute, time-dependent): /BI0/Y

In addition to the /BI0 tables delivered by SAP, you also have customer-specific /BIC tables with an otherwise identical naming convention.
Since objects and partitions are frequently created and deleted in BW, and extents are thus allocated and reallocated, you should use Locally Managed Table spaces (LMTS) in the BW environment wherever possible.
Since numerous hash, bitmap, and sort operations are carried out in the BW environment in particular, you must pay special attention to the configuration of the PGA and the PSAPTEMP tablespace. These components are crucial to the performance of the operations described. You must therefore ensure that PGA_AGGREGATE_TARGET is set to a reasonable size and that PSAPTEMP is located in a high-speed disk area. It may be useful to assign up to 40% of the memory available for Oracle to the PGA.
If you work with large hierarchies, you have to increase the size of this buffer considerably. You should be able to store at least 5,000 objects in the buffer.
The BW basis parameters must be set optimally for the BW system to work without errors and the system to perform efficiently. The recommendations for BW systems are not always the same as those for R/3 systems.
3. What are the factors which influence the performance on a BW system?

There are three major factors that influence the performance of a BW system.

● How we administer the BW system.

● The technical resources available.

● How the entire BW landscape is designed.

BW ADMINISTRATION

The first step to resolving most problems in a BW system is archiving. Archive as much data as you can: archive data from InfoCubes and ODS objects and delete the archived data from the BW database. This reduces the data volume and thus improves upload and query performance.
An archiving plan can also affect the data model. For a yearly update, for example, a MultiProvider with one data target per year can be a suitable partitioning design.

The archiving process in the BW system works slightly differently to that in an R/3 environment. In an R/3 system, the data is written to an archive file; afterwards, this file is read and the data is deleted from the database, driven by the content of the file. In a BW system, the data from the archive file is not used in the deletion process (it is only verified to be accessible and complete). The values of the selection characteristics that were used for retrieving data in the 'Write' job are passed to the selective deletion of the data target. This is the same functionality that is available within data target management in the Administrator Workbench ('Contents' tab strip). It tries to apply an optimal deletion strategy depending on the values selected; that is, it drops a partition when possible, or copies and renames the data target when more than a certain percentage of the data has to be deleted.
Reloading archived data should be the exception rather than the rule, since data should be archived only if it is no longer needed in the database. When the archived data target also serves as a data mart to populate other data targets, we recommend that you load the data into a copy of the original (archived) data target and combine the two resulting data targets with a MultiProvider.
In order to reload the data to a data target, you have to use the export DataSource of the archived data target. You then trigger the upload either by using 'Update ODS data in data target' or by replicating the DataSources of the MYSELF source system and subsequently scheduling an InfoPackage for the respective InfoSource. In this scenario we have used the first option.
Load balancing:
Load balancing provides the capability to distribute processing across several servers in order to optimally utilize the server resources that are available. An effective load balancing strategy can help you to avoid inefficient situations where one server is overloaded (and thus performance suffers on that server), while other servers go underutilized. The following processes can be balanced:
● Logon load balancing (via group login): this allows you to distribute the workload of multiple query/administration users across several application servers.
● Distribution of web users across application servers: this can be configured in the BEx service in SICF.
In addition, process chains, data loads, and data extractions can be routed to specific target servers.
In some cases, it is useful to restrict the extraction or data load to a specific server (in SBIW in an SAP source system, or SPRO in BW), i.e. not to use load balancing. This can be used for special cases where a certain server has fast CPUs and you therefore want to designate it as an extraction or data load server.
Reorganize log tables:
Logs of several processes are collected in the application log tables. These tables tend to grow very big as they are not automatically deleted by the system and can impact the overall system performance.
Table EDI40 can also grow very big depending on the number of IDOC records.
Depending on the growth rate (i.e., the number of processes running in the system), either schedule the reorganization process (transaction SLG2) regularly or delete log data as soon as you notice significant DB time spent on table BALDAT (e.g., in an SQL trace).
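As a quick way to gauge how much the application log has grown before scheduling SLG2, the log header table can be counted directly. A minimal sketch, assuming the standard application log header table BALHDR with its log date field ALDATE; the 90-day threshold is purely illustrative:

```abap
REPORT zcheck_appl_log.
* Sketch: count application log headers, total and older than a cutoff.
DATA: lv_total  TYPE i,
      lv_old    TYPE i,
      lv_cutoff TYPE sy-datum.

lv_cutoff = sy-datum - 90.                 " illustrative 90-day threshold

SELECT COUNT(*) FROM balhdr INTO lv_total.
SELECT COUNT(*) FROM balhdr INTO lv_old
  WHERE aldate < lv_cutoff.

WRITE: / 'Application log headers, total:', lv_total,
       / 'Older than 90 days:', lv_old.
```

If the second number dominates, scheduling the reorganization regularly is likely worthwhile.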


Delete old RSDDSTAT entries regularly. If several traces and logs run in the background, this can lead to bad overall performance, and it is sometimes difficult to discover all active logs. So be sure to switch off traces and logs as soon as they are no longer used.
Technical resources available:
The capacity of the hardware resources represents a highly significant aspect of the overall performance of the BW system. Insufficient resources in any one area can constrain performance capabilities.
These include:
● Number of CPUs
● Speed of CPUs
● Memory
● I/O controller
● Disk architecture (e.g. RAID)
A BW environment can contain a DB server and several application servers. These servers can be configured individually (e.g. number of dialog and batch processes), so that the execution of the different job types (such as queries, loading, DB processes) can be optimized. The general guideline here is to avoid hot spots and bottlenecks.
For optimizing the hardware resources, it is recommended to define at least two operation modes: one for batch processing (if there is a dedicated batch window) with several batch processes and one for the query processing with several dialog processes.
Different application servers have separate buffers and caches. E.g. the OLAP cache (BW 3.x) on one application server does not use the OLAP cache on other servers.
BW landscape design:
Info Cube modeling is the process by which business reporting requirements are structured into an object with the facts and characteristics that will meet the reporting needs.
Characteristics are structured together in related branches called dimensions.
The key figures form the facts.
The configuration of dimension tables in relation to the fact table is denoted as "star schema".
For a BW system to perform better we should not combine dynamic characteristics in the same dimension in order to keep dimensions rather small. Example: Don't combine customer and material in one dimension if the two characteristics are completely independent. As a general rule, it makes more sense to have many smaller dimensions vs. fewer larger dimensions. Dimension tables should be sized less than 10% of the fact table.
Use MultiProvider (or logical) partitioning to reduce the sizes of the Info Cubes.
Example: Define Info Cubes for one year and join them via a MultiProvider so we can have parallel access to underlying basis Info Cubes, load balancing, and resource utilization.
Define large dimensions as line item dimensions (e.g. document number or customer number) if, as a rule of thumb, the dimension table size exceeds 10% of the fact table(s) size. A B-tree index is generally preferable for cases of high cardinality (a high number of distinct values).
InfoCubes containing non-cumulative key figures should not be too granular. A high granularity will result in a huge number of reference points, which impacts aggregate builds significantly. Reference points can only be deleted by deleting an object key without specifying the time period, i.e. all available records for this key are deleted.
The data model has a tremendous impact on both query AND load performance, e.g. a bad dimension model: customer and material in one dimension instead of separate dimensions can lead to huge dimension tables, which slows down query performance, as it is expensive to join a huge dimension table to a huge fact table. Transaction RSRV can be used to check the fact to dimension table ratio.
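The RSRV ratio check can also be approximated ad hoc by comparing row counts directly. A minimal sketch, assuming the standard naming convention /BIC/D&lt;cube&gt;&lt;n&gt; for dimension tables and /BIC/F&lt;cube&gt; for the uncompressed fact table; the cube name ZSALES and dimension suffix 1 are illustrative assumptions:

```abap
REPORT zdim_fact_ratio.
* Sketch: dimension-to-fact table size ratio for one InfoCube.
DATA: lv_dimtab  TYPE tabname VALUE '/BIC/DZSALES1',  " assumed names
      lv_facttab TYPE tabname VALUE '/BIC/FZSALES',
      lv_dim     TYPE i,
      lv_fact    TYPE i,
      lv_ratio   TYPE p DECIMALS 2.

SELECT COUNT(*) FROM (lv_dimtab)  INTO lv_dim.
SELECT COUNT(*) FROM (lv_facttab) INTO lv_fact.

IF lv_fact > 0.
  lv_ratio = lv_dim * 100 / lv_fact.
  WRITE: / 'Dimension size as % of fact table:', lv_ratio.
  IF lv_ratio > 10.
    WRITE: / 'Rule of thumb exceeded - consider a line item dimension.'.
  ENDIF.
ENDIF.
```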
As non-cumulative key figures are well defined for every possible point in time (according to the calculation algorithm), it could make sense to restrict the validity to a certain time period. Example: If a plant is closed, it should not show up any stock figures. These objects can be defined as validity objects. Note that for every entry in the validity table, a separate query is generated at query runtime.
4. What are the factors to consider while extracting data from source system?

Data load performance can be affected by the following key aspects:
● Customer exits → check with RSA3, SE30 and ST05
● Resource utilization → SM50 / SM51
● Load balancing → SM50 / SM51 (configure ROIDOCPRMS)
● Data package size
● Indices on tables → ST05
● Flat file format
● Content vs. generic extractor

The size of the packages depends on the application and on the contents and structure of the documents. During data extraction, a dataset is collected in an array (internal table) in memory. The package size setting determines how large this internal table may grow before a data package is sent; thus it also defines the number of commits on DB level.
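The package size and related transfer parameters are maintained per source system in table ROIDOCPRMS (via SBIW). A minimal sketch that reads them, assuming the standard fields MAXSIZE (package size in kB), MAXLINES and STATFRQU (status IDoc frequency); the logical system name is a hypothetical value:

```abap
REPORT zshow_transfer_prms.
* Sketch: display data-transfer control parameters for one source system.
DATA: ls_prms TYPE roidocprms.

SELECT SINGLE * FROM roidocprms INTO ls_prms
  WHERE slogsys = 'SRCCLNT100'.          " hypothetical source system

IF sy-subrc = 0.
  WRITE: / 'Max package size (kB):', ls_prms-maxsize,
         / 'Max lines per package:', ls_prms-maxlines,
         / 'Status IDoc frequency:', ls_prms-statfrqu.
ELSE.
  WRITE: / 'No control parameters maintained for this source system.'.
ENDIF.
```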
Use RSMO and RSA3 to monitor the load.

Indices can be built on Data Source tables to speed up the selection process.
If data load performance is poor, refer to the following note:
Note 417307 - Extractor package size: Collective note for applications.
If you define selection criteria in your InfoPackage and the selection of the data is very slow, consider building indices on the DataSource tables in the source system.
5. What are the factors to consider while loading the data?

There are two major aspects to consider while loading data:
● I/O contention: use the O/S monitors to check for a high number of DB writes during large data loads, and review the disk layout and striping (what is located on the same disk, tablespace/DB space, etc.).
● Transformation rules → use SE30 and ST05.

The master data load creates all SIDs and populates the master data tables (attributes and/or texts). If the SIDs do not exist when transaction data is loaded, these tables have to be populated during the transaction data load, which slows down the overall process.

Another major optimization that can be performed at data load time is buffering number ranges. The SID number range can be buffered instead of accessing the DB for each SID.
If you encounter massive accesses to DB table NRIV via SQL trace (ST05), increase the number range buffer in transaction SNRO.

Always load master data before transaction data. The transaction data load will be improved, as all master data SIDs are created prior to the transaction data load, thus precluding the system from creating the SIDs at the time of load.
In transaction RSCUSTV6 the size of each PSA partition can be defined. This size defines the number of records that must be exceeded to create a new PSA partition. One request is contained in one partition, even if its size exceeds the user-defined PSA size; several packages can be stored within one partition.
The PSA is partitioned to enable fast deletion (DDL statement DROP PARTITION). Packages are not deleted physically until all packages in the same partition can be deleted.

Transformation rules are transfer rules and update rules. Start routines enable you to manipulate whole data packages (database array operations) instead of changing record-by-record. In general it is preferable to apply transformations as early as possible in order to reuse the data for several targets.
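As an illustration of such a package-level operation, a start routine can discard unneeded records with a single array DELETE instead of per-record logic. The surrounding FORM interface is generated by the system and varies by release, and the field /BIC/ZSTATUS is a purely illustrative assumption; only the array operation on DATA_PACKAGE is the point here:

```abap
* Start routine sketch (BW 3.x style). The enclosing FORM signature is
* system-generated and omitted here; DATA_PACKAGE is the internal table
* holding the whole data package, ABORT the return code.

* Drop all records flagged for deletion in one array operation,
* instead of checking each record in the individual transfer rules.
  DELETE data_package WHERE /bic/zstatus = 'D'.   " assumed status field

* Abort the load if nothing is left (illustrative safeguard).
  IF data_package[] IS INITIAL.
    abort = 4.
  ENDIF.
```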

Flat files: Flat files can be uploaded either in CSV format or in fixed-length ASCII format. If you choose CSV format, the records are internally converted in fixed-length format, which generates overhead.
You can upload files either from the client or from the application server. Uploading files from the client workstation implies sending the file to the application server via the network; the speed of the server backbone determines the level of performance impact, and gigabit backplanes make it negligible.
The size (i.e., number of records) of the packages and the frequency of status IDocs can be defined in table RSADMINC (transaction RSCUSTV6) for the flat file upload. If you load a large amount of flat file data, it is preferable to use fixed-length ASCII format, to store the files on the application server rather than on the client, and to set the parameters according to the recommendations in the referenced note.

If possible, split the files to achieve parallel upload. We recommend as many equally-sized files as CPUs are available.
6 / 7. How to tune your queries and aggregates?
The data in a Data Warehouse is largely very detailed. In SAP BW, the Info Cube is the primary unit of storage for data for reporting purposes. The results obtained by executing a report or query represent a summarized dataset.
An aggregate is a materialized, summarized view of the data in an Info Cube. It stores a subset of Info Cube data in a redundant form. When an appropriate aggregate for a query exists, summarized data can be read directly from the database during query execution, instead of having to perform this summarization during runtime. Aggregates reduce the volume of data to be read from the database, speed up query execution time, and reduce the overall load on the database.
A sound data model in BW should comprise of the following
Dimensional modeling.
Logical partitioning.
Physical partitioning.
The main purpose of an aggregate is to accelerate the response time of queries by reducing the amount of data that must be read from the database for a navigation step. Grouping and filtering enhance the value of an aggregate.
We can group according to the characteristic or attribute value, according to the nodes of the hierarchy level, and also filter according to a fixed value.
Queries are guaranteed to deliver consistent data when you drill down. This means that data provided when querying against an aggregate is always from the same set of data that is visible within an InfoCube.
Rollup
New data packets / requests that are loaded into the InfoCube cannot be used at first for reporting if there are aggregates that are already filled. The new packets must first be written to the aggregates by a so-called "roll-up". Data that has been recently loaded into an InfoCube is not visible for reporting, from the InfoCube or aggregates, until an aggregate roll-up takes place. During this process you can continue to report using the data that existed prior to the recent data load. The new data is only displayed by queries that are executed after a successful roll-up. See the attachment for more details on the technical process of a roll-up.
The split of a query is rule-based:
● Parts of the query on different aggregation levels are split.
● Parts with different selections on a characteristic are combined.
● Parts on different hierarchy levels, or parts using different hierarchies, are split.
After the split, the OLAP processor searches for an optimal aggregate for each part. Parts that use the same aggregate are combined again (in some cases it is not possible to combine them).
Maintaining an aggregate: RSDDV.

After selecting a particular InfoCube, we can drill down to the options of each aggregate to tune it.


This is the same screen for BI Accelerator index.
RSDDBIAMON:
This is another important transaction code, where the following actions can be performed:
Restart host: restarts the BI Accelerator hardware.
Restart BIA server: restarts all the BI Accelerator servers and services, including the name server and index server.
Restart BIA index server: restarts the index server (the name servers are not restarted).
Rebuild BIA indexes: if a check discovers inconsistencies in the indexes, you can use this action to delete and rebuild all the BI Accelerator indexes.
Reorganize BIA landscape: if the BI Accelerator server landscape is unevenly distributed, this action redistributes the loaded indexes on the BI Accelerator servers.
Checks
Connection Check
Index Check


In our system the BIA monitor is not set up, so we need to set it up. I am not going to set it up here, because it might affect a few other RFC destinations.
Query design:
● Multi-dimensional query
● Inclusion / exclusion
● MultiProvider query
● Cell calculation
● Customer exits
● Query read mode
Every query should start with a relatively small result set; let the user drill down to more detailed information.
Do not use ODS objects for multi-dimensional reporting.
Queries on MultiProviders usually access all underlying InfoProviders, even if some of them cannot contribute because none of the key figures in the query definition are contained in that InfoProvider.
In ORACLE, fact tables can be indexed either by bitmap indices or by B-tree indices. A bitmap index stores a bitmap stream for every characteristic value. Bitmap indices are suitable for characteristics with few values. Binary operations (AND or OR) are very fast.
B-tree indices are stored in a (balanced) tree structure. If the system searches for one entry, it starts at the root and follows a path down to the leaf where the row ID is linked. B-tree indices are suitable for characteristics with many values.
In some cases, ORACLE indices can degenerate. Degeneration is similar to fragmentation, and reduces the performance efficiency of the indexes. This happens when records are frequently added and deleted.
The OLAP Cache can help with most query performance issues. For frequently used queries, the first access fills the OLAP Cache and all subsequent calls will hit the OLAP Cache and do not have to read the database tables. In addition to this pure caching functionality, the Cache can also be used to optimize specific queries and drill-down paths by 'warming up' the Cache; with this you fill the Cache in batch to improve all accesses to this query data substantially.
8. What are the different options in Oracle for a good performing BW system?
I/O hotspots:
The purpose of disk layout is to avoid I/O hot spots by distributing the data accesses across several physical disks. The goal is to optimize the overall throughput to the disks.

The basic rule is: stripe over everything, including RAID-subsystems.

Managing tablespaces:
Locally managed tablespaces manage their own extents by maintaining bitmaps in each data file. The bitmaps correspond to (groups of) blocks.
Make sure that all (bigger) tablespaces are locally managed. Extent and partition maintenance is drastically improved, as DB dictionary accesses are minimized. Administration effort is also reduced.
Parallel query option:
ORACLE can read database table contents in parallel if this setting is active. BW uses this feature especially for staging processes and aggregate builds. The Parallel Query Option is used by default. Make sure that the init.ora entries for PARALLEL_MAX_SERVERS are set according to the recommendations in the ORACLE note.
Table partitioning:
Table partitions are physically separated tables, but logically they are linked to one table name. PSA tables and non-compressed F-fact table are partitioned by the system (by request ID). The (compressed) E-fact table can be partitioned by the user by certain time characteristics. For range-partitioned InfoCubes, the SID of the chosen time characteristic is added to both fact tables.
When using range partitioning, query response time is generally improved by partition pruning on the E fact table: all irrelevant partitions are discarded and the data volume to be read is reduced by the time restriction of the query.
In ORACLE, report SAP_DROP_EMPTY_FPARTITIONS can help you to remove unused or empty partitions of InfoCube or aggregate fact tables. Unused or empty partitions can emerge in case of selective deletion or aborted compression and may affect query performance as all F fact table partitions are accessed for queries on the InfoCube.
9. What are the different tools available to tune a BW system?

RSMO is used to monitor the data flow from the source system to the target system. We can see data by request, source system, time, request ID, etc. It provides all necessary information on the time spent in the different processes during the load (e.g., extraction time, transfer, posting to PSA, processing transformation rules, writing to fact tables). In the upload monitor you are also able to debug transfer and update rules.
If the extraction from an SAP source system consumes significant time, use the extractor checker (transaction RSA3) in the source system.
If the data transfer times are too high, check whether too many work processes are busy (if so, avoid large data loads with the "Update Data Targets in Parallel" method), and check for swapping on an application server (set rdisp/bufrefmode = "sendoff, exeauto" during the load phase if you use several application servers).
RSRT

The Query Monitor (transaction RSRT) allows you to execute queries and to trace queries in a debug mode with several parameters (e.g., do not use aggregates, do not use buffer, show SQL statement).
In the debug mode, you can investigate whether the correct aggregate(s) are used and which statistics the query execution generates. For checking purposes, you can switch off the usage of aggregates, switch off parallel processing (see the MultiProvider section for more details), or display the SQL statement and the run schedule.

Select a particular query and then click on performance info.

In this way we can generate detailed performance info for every query. Below is a screenshot containing the detailed information for this query.

Query tracing:
RSRTRACE: The Query Trace Tool (transaction RSRTRACE) allows you to record some important function module calls and process and debug them at a later stage. Transaction RSRCATTTRACE takes the log of RSRTRACE as input and gives aggregate suggestions for the first execution AND all further navigations performed.

RSRV: BW objects can be checked for consistency in transaction RSRV, and inconsistent objects can be repaired.

Apart from these BW tools, we have standard ABAP based tools like ST05, ST03n, SE30, SM50 and SM51 to check and measure the performance of the system.
In SE30, we have special options such as IF cases, field conversions, and monitoring of the SQL interface.

ST05: The SQL trace (transaction ST05) records all activities on the database and enables you to check long runtimes on a DB table or several similar accesses to the same data.
If we find problems for an isolated process (upload or query) and we have analyzed, for example, the existence of aggregates, we can refine our analysis by using the SQL trace. Filter on a specific user (e.g. a query user or the extraction user ALEREMOTE) and make sure that no concurrent jobs run at the same time as this execution. We will find out which tables are accessed, how much time is consumed, and whether some tables are accessed redundantly.

Another important tool to be used is ST10.
Here we can find the statistics of a table and get more detailed info on it. If we assume a general buffer problem, check ST10 and the buffer settings of all tables; compare buffer usage vs. invalidations.
ST04, DB02, SM50, SM51, ST02, ST06 are some of the important tools which we normally use in R/3. These transaction codes should be used extensively here as well for gauging and optimizing the performance of the system.

10. What are the best practices we can follow in a BW system?

Best practices for a production BW system can be drafted only with a close interaction with the functional team and technical team and the nature of the production system.

Here are a couple of best practices we could implement to improve performance.
Activate transfer rules for an InfoSource: When you have maintained the transfer structure and the communication structure, you can use the transfer rules to determine how the transfer structure fields are to be assigned to the communication structure InfoObjects. You can arrange for a 1:1 assignment, but you can also fill InfoObjects using routines or constants.
Use scheduler:
The scheduler is the connecting link between the source systems and the SAP Business Information Warehouse. Using the scheduler you can determine when and from which InfoSource, DataSource, and source system, data (transaction data, master data, texts or hierarchies) is requested and updated.
The principle behind the scheduler relates to the functions of SAP background jobs. The data request can be scheduled either straight away, or it can be scheduled with a background job and started automatically at a later point in time. We get to the data request via the scheduler in the Administrator Workbench (Modeling) by choosing InfoSource Tree → Your Application Component → InfoSources → Source System → Context Menu → Create InfoPackage.

Assign several DataSources to one InfoSource: do this if you want to gather data from different sources into a single InfoSource. This is used, for example, if data from different IBUs that logically belongs together is grouped together in BW.
The fields for a DataSource are assigned to InfoObjects in BW. This assignment takes place in the same way in the transfer rules maintenance.

Wednesday, November 11, 2009

Listing all queries within a workbook

I have been working on an unusual and complex planning workbook that has 32 separate queries embedded in it, and I was recently shown a quick and easy way to list all of them. We have a naming convention to identify all the queries that should be in the workbook, but this technique can be used to validate it, because we all know that assumptions can be wrong!
The required steps are listed below:
1. Execute transaction SE37 to enter the Function Builder
2. Enter the function module: RRMX_WORKBOOK_QUERIES_GET
3. Click the Test/Execute button as highlighted in red below:

4. Enter the workbook's unique ID and click the Execute button

You can find the workbook ID by clicking Open Workbook within BEx Analyzer, selecting the workbook, and then viewing its properties.

5. Click the View All Entries button as highlighted in red below:

6. All the queries are listed!
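If you prefer not to go through SE37 each time, the same list can be read from the BW metadata tables. A minimal sketch, assuming the standard tables RSRWORKBOOK (workbook-to-query assignment via GENUNIID) and RSRREPDIR (query directory, with the query technical name in COMPID):

```abap
REPORT zlist_workbook_queries.
* Sketch: list all queries embedded in one workbook.
* RSRWORKBOOK maps WORKBOOKID -> GENUNIID; RSRREPDIR maps
* GENUNIID -> COMPID (the query technical name).
PARAMETERS: p_wbid TYPE rsrworkbook-workbookid OBLIGATORY.

DATA: lv_genuniid TYPE rsrworkbook-genuniid,
      lv_compid   TYPE rsrrepdir-compid.

SELECT genuniid FROM rsrworkbook INTO lv_genuniid
  WHERE workbookid = p_wbid.
  SELECT SINGLE compid FROM rsrrepdir INTO lv_compid
    WHERE genuniid = lv_genuniid.
  IF sy-subrc = 0.
    WRITE: / lv_compid.
  ENDIF.
ENDSELECT.
```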

Tuesday, November 10, 2009

An ABAP Program to Effortlessly Check Data Load Status in BW

http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/e00a69e2-08c2-2b10-b894-abab48719955

interesting tricks -

* RSDG_MPRO_ACTIVATE is a program to activate MultiProviders in the production system directly. If there are any inactive MultiProviders due to a transport or any other reason, this will activate them without affecting reporting.


* RSICCONT is a table that holds the requests of a data target (DSO or cube). If you face a problem deleting a request from a DSO or a cube while loading data, try this.



Needless to say, try it in a sandbox or development system before attempting it in a production environment.

Current Day Data Load Monitor Program

The conventional way to get the load details, say for an InfoCube, is to open the context menu and click Manage on the InfoCube. If we want to collect load details for 10 physical data targets, we have to repeat this procedure ten times. In a support environment, if a consultant wants to collect the load details for a given day, it can be a tedious job to do manually. This program doesn't require any input parameters at all. It shows whatever loads happened on the current day into all physical data targets in the system. We can filter and sort the resulting output, as it is built using ALV.
________________________________________
Here is the ABAP code.
REPORT ZDATA_LOAD_MONITOR.
type-pools: slis.
DATA: W_ICUBE TYPE RSINFOCUBE,
W_RNSIDLAST TYPE RSSID,
W_TIMESTAMP TYPE RSTIMESTMP,
ZDATE1(14),
ZDATE(8).
data: begin of itab occurs 0,
W_DTA type RSSTATMANDTA,
W_DTA_TYPE TYPE RSSTATMANDTA_TYPE,
W_LINESREAD type sy-tabix,
W_LINESTRANSFERRED type sy-tabix,
W_ISOURCE TYPE RSISOURCE,
W_SOURCE_DTA TYPE RSSTATMANDTA,
end of itab.
data:s_fieldcat type slis_fieldcat_alv,
t_fieldcat type slis_t_fieldcat_alv.
SELECT ICUBE RNSIDLAST TIMESTAMP FROM RSICCONT
INTO (W_ICUBE,W_RNSIDLAST,W_TIMESTAMP).
ZDATE1 = W_TIMESTAMP.
ZDATE = ZDATE1.
IF ZDATE EQ SY-DATUM.
SELECT SINGLE dta DTA_TYPE anz_recs insert_recs ISOURCE SOURCE_DTA FROM RSSTATMANPART INTO itab
WHERE partnr = W_RNSIDLAST.
append itab.
ENDIF.
ENDSELECT.
perform fieldcatlog using 'TARGET' 'ITAB' 'W_DTA'.
perform fieldcatlog using 'TARGET TYPE' 'ITAB' 'W_DTA_TYPE'.
perform fieldcatlog using 'RECORDS SELECTED' 'ITAB' 'W_LINESREAD'.
perform fieldcatlog using 'RECORDS TRANSFERRED' 'ITAB' 'W_LINESTRANSFERRED'.
perform fieldcatlog using 'SOURCE' 'ITAB' 'W_SOURCE_DTA'.
perform fieldcatlog using 'INFOSOURCE' 'ITAB' 'W_ISOURCE'.
sort itab by W_DTA W_DTA_TYPE W_LINESREAD W_LINESTRANSFERRED W_SOURCE_DTA W_ISOURCE .
delete ADJACENT DUPLICATES FROM ITAB COMPARING ALL FIELDS.
CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
EXPORTING
I_CALLBACK_PROGRAM = sy-repid
IT_FIELDCAT = t_fieldcat
TABLES
T_OUTTAB = itab.
*&---------------------------------------------------------------------*
*& Form FIELDCATLOG
*&---------------------------------------------------------------------*
FORM FIELDCATLOG USING VALUE(P_0107)
p_0108
p_0109.
clear s_fieldcat.
s_fieldcat-seltext_l = P_0107.
s_fieldcat-fieldname = p_0109.
s_fieldcat-tabname = p_0108.
APPEND s_fieldcat to t_fieldcat.
ENDFORM. " FIELDCATLOG

Output after executing the program:

The output shows the type of each target (DSO or cube), and we can see the source for each data target.

Monday, November 2, 2009

SAP BW Functions details PDF material download

Reporting, analysis and interpretation of business data is a central focus for companies that wish to guarantee competitiveness, optimize processes and be able to react quickly and in line with the market. As the following graphic illustrates, SAP Business Information Warehouse (SAP BW), as a core component of SAP NetWeaver data warehousing functionality, provides both a business intelligence platform and a suite of business intelligence tools.
The functions in detail BW document is aimed at people who have little experience with SAP BW and are looking for an overview of the features SAP BW provides. Knowledge of data warehouse and business intelligence solutions, as well as an awareness of Internet standards and standardized communication technologies, is assumed. The functions in detail document summarizes the most important functions and tools provided by SAP BW as of release 3.5 and offers an overview of the ways in which these functions and tools can be used. BI Content, delivered for SAP BW, is not discussed in this document. You can find additional information about SAP BW functions in the SAP BW documentation in the SAP Help Portal, at http://help.sap.com under SAP NetWeaver → SAP NetWeaver → Information Integration → SAP Business Information Warehouse.
The contents of the SAP BW study material and tutorial download are as follows.
This SAP BW functions guide includes
INTRODUCTION (p. 7)
1 DATA WAREHOUSING (p. 9)
1.1 ADMINISTRATOR WORKBENCH (p. 9)
1.2 DATA MODELING (p. 9)
1.2.1 DataSource (p. 10)
1.2.2 Persistent Staging Area Table (p. 10)
1.2.3 InfoSource (p. 10)
1.2.4 Update Rules (p. 11)
1.2.5 InfoObjects (p. 11)
1.2.6 Data Targets (p. 12)
1.2.6.1 InfoCube (p. 13)
1.2.6.2 ODS Object (p. 14)
1.2.6.3 Performance-Optimized Data Target Modeling (p. 15)
1.2.7 InfoProvider (p. 17)
1.2.7.1 InfoSet (p. 18)
1.2.7.2 SAP RemoteCube (p. 18)
1.2.7.3 RemoteCube (p. 18)
1.2.7.4 Virtual InfoCubes with Services (p. 19)
1.2.7.5 MultiProviders (p. 19)
1.3 DATA RETRIEVAL (p. 19)
1.3.1 SAP BW Source Systems (p. 20)
1.3.1.1 Transferring Data from SAP Source Systems (p. 20)
1.3.1.2 Transferring Data Between Data Targets Within a SAP BW or into Additional SAP Systems – Data Mart Interface (p. 22)
1.3.1.3 Transferring Data from Flat Files (p. 23)
1.3.1.4 Transferring Data Based on the Simple Object Access Protocol (SOAP) (p. 23)
1.3.1.5 Transferring Data from a System Using Third-Party ETL Tools – Staging BAPIs (p. 25)
1.3.1.6 Transferring Data from Database Management System Tables / Views - DB Connect (p. 25)
1.3.1.7 Transferring Data with UD Connect (p. 25)
1.3.2 SAP BW as Source System (p. 26)
1.3.2.1 SAP BW as Source System for Additional SAP BW Systems – Data Mart Interface (p. 26)
1.3.2.2 Open Hub Service (p. 26)
1.4 DATA WAREHOUSE MANAGEMENT (p. 27)
1.4.1 Authorizations (p. 27)
1.4.2 Metadata Repository (p. 28)
1.4.3 Document Management (p. 29)
1.4.4 Transporting (p. 30)
1.4.5 Installing Business Content (p. 31)
1.4.6 Technical Content (p. 32)
1.4.7 Analysis and Repair Environment (p. 33)
1.4.8 Process Management (p. 34)
1.4.8.1 Scheduler (p. 34)
1.4.8.2 Monitor (p. 35)
1.4.8.3 Process Chains (p. 36)
1.4.8.4 Data Archiving (p. 37)
2 BUSINESS INTELLIGENCE PLATFORM (p. 39)
2.1 OLAP (p. 39)
2.1.1 The OLAP Processor (p. 39)
2.1.2 Special OLAP Functions and Services (p. 40)
2.1.2.1 Aggregation (p. 41)
2.1.2.2 Local Calculations (p. 41)
2.1.2.3 Hierarchies (p. 42)
2.1.2.4 Currency Translation (p. 45)
2.1.2.5 Elimination of Internal Business Volume (p. 45)
2.1.2.6 Selecting Constants (p. 45)
2.1.2.7 Reporting Authorizations (p. 46)
2.1.3 Report-Report Interface (p. 46)
2.1.4 Performance Optimization (p. 47)
2.1.4.1 (p. 47)
2.1.4.2 Non-Cumulatives (p. 47)
2.1.4.3 Aggregates (p. 48)
2.1.4.4 OLAP Cache (p. 49)
2.1.5 Open Analysis Interfaces (p. 50)
2.1.5.1 XML for Analysis (p. 50)
2.1.5.2 Web Service for Access to Query Data (p. 51)
2.1.5.3 OLAP BAPIs (p. 51)
2.1.5.4 OLE DB for OLAP and ADO MD (p. 51)
2.2 BUSINESS PLANNING AND SIMULATION (BW-BPS) (p. 51)
2.2.1 Data Basis (p. 52)
2.2.2 Modeling (p. 53)
2.2.3 Data Selection (p. 53)
2.2.4 Manual Planning (p. 53)
2.2.5 Planning Functions (p. 54)
2.2.6 Variables (p. 54)
2.2.7 Hierarchies and Attributes (p. 54)
2.2.8 Characteristic Relationships (p. 55)
2.2.9 Front Ends (p. 55)
2.2.10 Status and Tracking System (p. 55)
2.3 ANALYSIS PROCESS DESIGNER (p. 56)
2.4 DATA MINING (p. 57)
2.5 ALERT MANAGEMENT AND BACKGROUND SERVICES FOR REPORTING (p. 57)
2.5.1 Evaluating Exceptions (p. 58)
2.5.2 Printing Queries (p. 58)
2.5.3 Pre-calculating Web Templates (p. 58)
2.5.4 Precalculating Characteristic Variables of Type Precalculated Value Sets (p. 58)
2.5.5 Managing Bookmarks (p. 58)
2.5.6 Crystal Reports Queries (p. 58)
2.6 ICF SERVICES IN SAP BW (p. 58)
3 BUSINESS INTELLIGENCE SUITE: REPORTING AND ANALYSIS (p. 59)
3.1 QUERY DESIGN (p. 60)
3.1.1.1 Tabular and Multi-dimensional Query Display (p. 60)
3.1.1.2 Variables (p. 61)
3.1.1.3 Structures (p. 62)
3.1.1.4 Restricted Key Figures (p. 63)
3.1.1.5 Calculated Key Figures (p. 64)
3.1.1.6 Exception Cells (p. 64)
3.1.1.7 Restricting Characteristics (p. 65)
3.1.1.8 Exceptions (p. 65)
3.1.1.9 Conditions (p. 66)
3.1.2 The BEx Query Designer (p. 67)
3.1.3 Ad-hoc Query Designer (p. 69)
3.2 BEX WEB (p. 70)
3.2.1 Web Application Design (p. 70)
3.2.1.1 BEx Web Application Designer (p. 71)
3.2.1.2 Web Design API (p. 76)
3.2.2 Analysis & Reporting: BEx Web Applications (p. 78)
3.2.2.1 Context Menu (p. 78)
3.2.2.2 Using Documents in Web Applications (p. 81)
3.2.2.3 Analyzing Business Data Using Map and Chart Web Items (p. 81)
3.2.2.4 Ad-hoc Query Designer (p. 82)
3.2.2.5 Alert Monitor (p. 83)
3.2.2.6 Data Mining (p. 83)
3.2.2.7 Variables in Web Applications (p. 83)
3.2.2.8 Standard Web Template for Ad-hoc Analysis (p. 83)
3.2.2.9 Accessibility (p. 84)
3.2.2.10 Web Browser Dependencies (p. 84)
3.2.3 BEx Web Analyzer (p. 84)
3.2.4 BEx Mobile Intelligence (p. 85)
3.2.4.1 Automatic Device Recognition (p. 87)
3.2.4.2 Alert Scenario (p. 87)
3.2.4.3 Online Scenario (p. 88)
3.2.4.4 Offline Scenario ………………………….. 89
3.3 ANALYSIS & REPORTING: BEX ANALYZER……………………………………………………………………….. 90
3.3.1 BEx Toolbar ………………………. 91
3.3.1.1 OLAP Functions for Active Cells……. 92
3.3.2 Evaluating Query Data………… 93
3.3.3 Queries in Workbooks ………… 94
3.3.4 Precalculating Workbooks …… 94
3.4 FORMATTED REPORTING: CRYSTAL ENTERPRISE INTEGRATION …………………………………………… 94
3.5 BEX BROWSER ………………………….. 95
3.6 BEX INFORMATION BROADCASTING… 96
3.6.1 BEx Broadcaster………………… 97
3.6.1.1 Output Formats………………………….. 98
3.6.1.2 Filter Settings…………………………….. 99
3.6.1.3 Sending by E-mail……………………….. 99
3.6.1.4 Exporting to the Enterprise Portal ….. 99
3.6.1.5 Scheduling Broadcast Settings ……..100
3.6.2 Publishing Queries and Web Applications…………………………………………………………… 100
4 INTEGRATION INTO THE ENTERPRISE PORTAL…………………………………………………………. 102
4.1 INTEGRATION OPTIONS………………. 102
4.1.1 Displaying Content from SAP BW in the Enterprise Portal ……………………………………. 103
4.1.2 Calling Content from SAP BW in the Enterprise Portal …………………………………………. 104
4.1.3 Content from SAP BW in the Navigation Panel……………………………………………………. 104
4.1.4 Generating Content from SAP BW for the Enterprise Portal………………………………….. 104
4.1.5 Working with Content from SAP BW in the Enterprise Portal ………………………………… 105
4.2 UNIFICATION IN THE PORTAL: DRAG&RELATE WITH BW CONTENT IN THE ENTERPRISE PORTAL .. 106
5 DEVELOPMENT TECHNOLOGIES… 108
5.1 BI JAVA SDK …………………………… 108
5.1.1 Components of the SDK……. 109
5.1.1.1 Application Programming Interfaces 109
5.1.1.2 Documentation…………………………..109
5.1.1.3 Examples………………………………….109
5.1.2 Overview of the SDK Architecture ……………………………………………………………………… 109
5.2 BI JAVA CONNECTORS……………….. 110
6 GLOSSARY ………………………………… 112

DOWNLOAD THIS TUTORIAL