Wednesday, December 23, 2009

Number of records inserted in infocube

Table : RSMONICDP

This table shows the number of records inserted or updated in an InfoCube per request.
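
If you want to look at this table from a quick ABAP report, a minimal sketch could look like the one below. The field names RNR (request number), DP_NR (data packet number) and RECORDS are assumptions, so check the field list of RSMONICDP in SE11 before using it.

* Sketch: list the records written per data packet for one load request.
* RNR, DP_NR and RECORDS are assumed field names of RSMONICDP - verify in SE11.
PARAMETERS: p_rnr(30) TYPE c.          " request number, e.g. REQU_...

DATA: lv_dp_nr(6) TYPE n,
      lv_records  TYPE i,
      lv_total    TYPE i.

SELECT dp_nr records FROM rsmonicdp
  INTO (lv_dp_nr, lv_records)
  WHERE rnr = p_rnr.
  lv_total = lv_total + lv_records.
  WRITE: / 'Data packet', lv_dp_nr, 'records', lv_records.
ENDSELECT.

WRITE: / 'Total records for request', p_rnr, ':', lv_total.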

More tables of this kind :

RSSELDONE
RSSELDTP

RSSELMON

RSREQDONE
RSICCONT

To check BW objects that need repair in SAP BI

Go to TCode SE38.
Enter the program: RSAR_RSISOSMAP_REPAIR

Click on the execute button or press F8.
Here you will see the "Repair mode" checkbox; tick it and click execute.

Monday, December 21, 2009

How to Remove Leading Zeros in Transformations

This can be done in many ways...
1) This can be handled very easily at the InfoObject level by selecting the 'ALPHA' conversion routine.

2) You can also tick the "Conversion" option in the transfer rules. This performs the same conversion as the ALPHA routine.

3) Sometimes you need to write an ABAP routine to remove leading zeros in transformations.

Here is sample code to remove leading zeros from the 0ALLOC_NMBR field:

* Remove leading zeros with the standard ALPHA output conversion.
DATA: V_OUTPUT LIKE TRAN_STRUCTURE-ALLOC_NMBR.

CALL FUNCTION 'CONVERSION_EXIT_ALPHA_OUTPUT'
  EXPORTING
    INPUT  = TRAN_STRUCTURE-ALLOC_NMBR
  IMPORTING
    OUTPUT = V_OUTPUT.

* Pass the converted value on as the routine result.
RESULT = V_OUTPUT.


4) Another example ABAP routine (this one only works for this fixed-length pattern):

* If the first four characters are zeros, keep only the remaining six digits.
IF TRAN_STRUCTURE-ALLOC_NMBR+0(4) = '0000'.
  RESULT = TRAN_STRUCTURE-ALLOC_NMBR+4(6).
ELSE.
  RESULT = TRAN_STRUCTURE-ALLOC_NMBR.
ENDIF.
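
A further option, shown here as a minimal sketch that is not tied to any particular InfoObject, is ABAP's SHIFT statement:

* Copy the value into a character field and strip the leading zeros.
DATA: V_VALUE(18) TYPE c.

V_VALUE = TRAN_STRUCTURE-ALLOC_NMBR.
SHIFT V_VALUE LEFT DELETING LEADING '0'.
RESULT = V_VALUE.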

Saturday, December 19, 2009

BW Questions-V01

1. What is the difference between OLTP and OLAP?
OLTP: current data; short database transactions; online update/insert/delete; normalization is promoted; high transaction volumes; transaction recovery is necessary.

OLAP: current and historical data; long database transactions; batch update/insert/delete; denormalization is promoted; low transaction volumes; transaction recovery is not necessary.

OLTP stands for OnLine Transaction Processing; it works on normalized tables and current, online data with frequent inserts, updates and deletes.
OLAP (OnLine Analytical Processing) holds the history of the OLTP data; it is non-volatile, acts as a decision support system and is used for creating forecasting reports.



2. What are the different types of multidimensional models?

3. What is a dimension?

A grouping of those evaluation groups (characteristics) that belong together under a common superordinate term.
With the definition of an InfoCube, characteristics are grouped together into dimensions in order to store them in a star schema table (dimension table).


4. What is the fact and fact table?
Table in the center of an InfoCube star schema.
The data part contains all key figures of the InfoCube and the key is formed by links to the entries of the dimensions of the InfoCube.


5. What is the difference between key performance indicators (KPIs) and key figures?

Key Performance Indicators are quantifiable measurements, agreed to beforehand, that reflect the critical success factors of an organization. They differ depending on the organization. A business may have as one of its Key Performance Indicators the percentage of its income that comes from return customers. A school may focus its Key Performance Indicators on the graduation rates of its students. A Customer Service Department may have as one of its Key Performance Indicators, in line with overall company KPIs, the percentage of customer calls answered in the first minute. A Key Performance Indicator for a social service organization might be the number of clients assisted during the year.
Whatever Key Performance Indicators are selected, they must reflect the organization's goals, they must be key to its success, and they must be quantifiable (measurable). Key Performance Indicators are usually long-term considerations. The definition of what they are and how they are measured does not change often. The goals for a particular Key Performance Indicator may change as the organization's goals change, or as it gets closer to achieving a goal.
6. What is the difference between the star schema and the extended star schema?

The differences between the star and extended star schemas:
1) Master data is not reusable in the star schema because it sits inside the cube, i.e. in the star schema the dimension tables and master data tables are the same, and both are inside the cube. In the extended star schema the master data tables are outside the cube, so they are reusable components; here the master data tables and dimension tables are different.
2) Limited analysis: in the star schema the maximum number of master data tables is 16, whereas in the extended star schema the maximum number of dimension tables is 16 and we can assign a maximum of 233 characteristics to one dimension table, i.e. 233 * 16 characteristics in total.
3) Lower performance: the star schema works with alphanumeric data, while the extended star schema uses numeric data - the generated SIDs that link to the dimension tables are numeric - so the star schema performs worse by comparison.

7. What is a dimension table in the extended star schema, when exactly is it created and when does it get populated?

8. What is a SID (surrogate ID) table in the extended star schema, when exactly is it created and when does it get populated?

9. What are the limitations of InfoCube modeling in BW?

10. What are flexible update and direct update, and what is the difference?
When we load data into a data target at the InfoProvider level, we use flexible update (whether it is master data or transaction data). When we load data into the data target at the InfoObject level, we use direct update.

The main difference: direct update = without update rules; flexible update = with update rules.
Scenarios for Flexible Updating
1. Attributes and texts are delivered together in a file:
Your master data, attributes, and texts are available together in a flat file. They are updated into InfoObjects via an InfoSource with flexible updating. In doing so, texts and attributes can be separated from each other in the communication structure.
Flexible updating is not necessary if:
· texts and attributes are available in separate files/DataSources. In this case, you can choose direct updating if additional transformations using update rules are not necessary.
2. Attributes and texts come from several DataSources:
This scenario is similar to the one described above, only slightly more complex. Your master data comes from two different source systems and delivers attributes and texts in flat files. They are grouped together in an InfoSource with flexible updating. Attributes and texts can be separated in the communication structure and are updated further in InfoObjects. The texts or attributes from both source systems are located in these InfoObjects.
3. Master data in the ODS layer:
A master data InfoSource is updated to a master data ODS object business partner with flexible updating. The data can now be cleaned and consolidated in the ODS object before being re-read. This is important when the master data frequently changes.
These cleaned objects can now be updated to further ODS Objects. The data can also be selectively updated using routines in the update rules. This enables you to get views of selected areas. The data for the business partner is divided into customer and vendor here.
Instead you can update the data from the ODS object in InfoObjects as well (with attributes or texts). When doing this, be aware that loading of deltas takes place serially. You can ensure this when you activate the automatic updates in ODS object maintenance or when you perform the loading process using a process chain (see also Including ODS Objects in a Process Chain).
A master data ODS object generally makes the following options available:
· It displays an additional level on which master data from the whole enterprise can be consolidated.
· ODS objects can be used as a validation table for checking the referential integrity of characteristic values in the update rules.
· It can serve as a central repository for master data, in which master data is consolidated from various systems. They can then be forwarded to further BW systems using the Data Mart.


Direct update is generally used for master data InfoObjects and hierarchies. Here no update rules are used, which means data from the source system passes through the transfer structure, transfer rules, and communication structure directly to the data target, i.e. the InfoObject.

11. What are transfer rules and update rules, and what is the difference?
Why do we use update rules while loading data from the source system? Why can we not load data directly from the transfer rules into the data target? Update rules sit after the InfoSource and before the data target. Transaction data cannot be loaded into the data target without passing through the update rules; in the case of master data, update rules are not required. Take one example: say you have customer, quantity, price, revenue and date, and you can extract customer, quantity, price and date from the source system but cannot extract revenue. In the transfer rules you can apply rules on quantity and price so that revenue is derived. Now suppose the requirement is that the period should also be presented in the report. Then in the update rules, by setting the date as the time reference characteristic, the system will derive values such as period, week and month. Likewise, depending on the requirements, you can use the update rules. These rules are applied to fill the data target, and the data lands in the respective object locations according to them.
The reasons for having update rules are:
1. If a business rule (say, if a certain quantity > 5 then the rating is "A") needs to be implemented, you would have to implement it in every set of transfer rules, whereas in an update rule only once.
2. You can use return tables in update rules, which split an incoming data package record into multiple records. This is not possible in transfer rules.
3. Currency conversion is not possible in transfer rules.
4. If you have a key figure that is calculated from the base key figures, you do the calculation only in the update rules.
What are the different types of transfer rules?
There are four types: 1) InfoObject: direct mapping; 2) Constant: a fixed value; 3) Formula: the value is determined using a formula; 4) Routine: an ABAP program.
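
For the routine type, the body of a transfer routine could look like the following minimal sketch. It reuses the TRAN_STRUCTURE/RESULT naming that appears in the code earlier in this post; the defaulting rule itself is invented purely for illustration.

* Routine-type transfer rule: derive RESULT from a source field,
* falling back to a default value when the field is empty.
IF TRAN_STRUCTURE-ALLOC_NMBR IS INITIAL.
  RESULT = 'UNASSIGNED'.
ELSE.
  RESULT = TRAN_STRUCTURE-ALLOC_NMBR.
ENDIF.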

12. What are the update mode, update method and update type for updating data into an InfoCube?

13. What is the PSA, and what are its advantages and disadvantages?

14. What are the fields of the PSA?

15. How many DataSources can be assigned to an InfoSource?

16. What are the different transformation methods in transfer rules?

17. What is an ER diagram?

The Entity-Relationship (ER) model was originally proposed by Peter Chen in 1976 [Chen76] as a way to unify the network and relational database views. Simply stated, the ER model is a conceptual data model that views the real world as entities and relationships. A basic component of the model is the Entity-Relationship diagram, which is used to visually represent data objects. Since Chen wrote his paper, the model has been extended, and today it is commonly used for database design. For the database designer, the utility of the ER model is that it maps well to the relational model: the constructs used in the ER model can easily be transformed into relational tables. It is also simple and easy to understand with a minimum of training; therefore, the model can be used by the database designer to communicate the design to the end user. In addition, the model can be used as a design plan by the database developer to implement a data model in specific database management software.

Why does an InfoCube have a maximum of 16 dimensions?
As the total number of characteristics is 255, out of which 16 are allowed as foreign keys and 6 are SAP defaults, we have only 16 dimension tables, of which 3 are again SAP defaults (unit, time, data packet); so finally we have only 13 user-defined dimensions.

Monday, December 14, 2009

Interview Questions:

1. Identify the statement(s) that is/are true. A change run...

a. Activates the new Master data and Hierarchy data
b. Aggregates are realigned and recalculated
c. Always reads data from the InfoCube to realign aggregates
d. Aggregates are not affected by change run

1: A, B

2. Which statement(s) is/are true about Multiproviders?

a. This is a virtual Infoprovider that does not store data
b. They can contain InfoCubes, ODSs, info objects and info sets
c. More than one info provider is required to build a Multiprovider
d. It is similar to joining the data tables

2: A, B

3. The structure of the PSA table created for an info source will be...

a. Featuring the exact same structure as Transfer structure
b. Similar to the transfer rules
c. Similarly structured as the Communication structure
d. The same as Transfer structure, plus four more fields in the beginning

3: D

4. In BW, special characters are not permitted unless it has been defined using this transaction:

a. rrmx
b. rskc
c. rsa15
d. rrbs

4: B

5. Select the true statement(s) about info sources:

a. One info source can have more than one source system assigned to it
b. One info source can have more than one data source assigned to it provided the data sources are in different source systems
c. Communication structure is a part of an info source
d. None of the above

5: A, C

6. Select the statement(s) that is/are true about the data sources in a BW system:

a. If the hide field indicator is set in a data source, this field will not be transferred to BW even after replicating the data source
b. A field in a data source won't be usable unless the selection field indicator has been set in the data source
c. A field in an info package will not be visible for filtering unless the selection field has been checked in the data source
d. All of the above

6: A, C

7. Select the statement(s) which is/are true about the 'Control parameters for data transfer from the Source System':

a. The table used to store the control parameters is ROIDOCPRMS
b. Field max lines is the maximum number of records in a packet
c. Max Size is the maximum number of records that can be transferred to BW
d. All of the above

7: A

8. The indicator 'Do not condense requests into one request when activation takes place' during ODS activation applies to condensation of multiple requests into one request to store it in the active table of the ODS.

a. True
b. False

8: B

9. Select the statement(s) which is/are not true related to flat file uploads:

a. CSV and ASCII files can be uploaded
b. The table used to store the flat file load parameters is RSADMINC
c. The transaction for setting parameters for flat file upload is RSCUSTV7
d. None of the above

9: C

10. Which statement(s) is/are true related to Navigational attributes vs Dimensional attributes?

a. Dimensional attributes have a performance advantage over Navigational attributes for queries
b. Change history will be available if an attribute is defined as navigational
c. History of changes is available if an attribute is included as a characteristic in the cube
d. All of the above

10: A, C

11. When a dimension is created as a line item dimension in a cube, the dimension IDs will be the same as the SIDs.

a. True
b. False

11: A

12. Select the true statement(s) related to the start routine in the update rules:

a. All records in the data packet can be accessed
b. Variables declared in the global area are available for individual routines
c. A return code greater than 0 will abort the whole packet
d. None of the above

12: A, B, C

13. If a characteristic value has been entered in InfoCube-specific properties of an InfoCube, only these values can be loaded to the cube for that characteristic.

a. True
b. False

13: A

14. After any changes have been done to an info set it needs to be adjusted using transaction RSISET.

a. True
b. False

14: A

15. Select the true statement(s) about read modes in BW:

a. Read mode determines how the OLAP processor retrieves data during query execution and navigation
b. Three different types of read modes are available
c. Can be set only at individual query level
d. None of the above

15: A, B

BW Interview Questions

1) Please describe your experience with BEx (Business Explorer)
A) Rate your level of experience with BEx and the rationale for your self-rating

B) How many queries have you developed? :

C) How many reports have you written?

D) How many workbooks have you developed?

E) Experience with jump targets (OLTP, use jump target)

F) Describe experience with BW-compatible ETL tools (e.g. Ascential)

2) Describe your experience with 3rd party report tools (Crystal Decisions, Business Objects a plus)

3) Describe your experience with the design and implementation of standard & custom InfoCubes.

1. How many InfoCubes have you implemented from start to end by yourself (not with a team)?

2. Of these cubes, how many characteristics (including attributes) did the largest one have?

3. How much customization was done on the InfoCubes you have implemented?

4) Describe your experience with requirements definition/gathering.

5) What experience have you had creating Functional and Technical specifications?

6) Describe any testing experience you have:

7) Describe your experience with BW extractors

1. How many standard BW extractors have you implemented?

2. How many custom BW extractors have you implemented?

8) Describe how you have used Excel as a complement to BEx

A) Describe your level of expertise and the rationale for your self-rating (experience with macros, pivot tables and formatting)

9) Describe experience with ABAP

10) Describe any hands on experience with ASAP Methodology.

11) Identify SAP functional areas (SEM, CRM, etc.) you have experience in. Describe that experience.

12) What is partitioning and what are the benefits of partitioning in an InfoCube?

A) Partitioning is the method of dividing a table (either column-wise or row-wise) based on the available fields, which enables quick access to the intended field values in the table. By partitioning an InfoCube, reporting performance is enhanced because it is easier to search smaller tables, and table maintenance also becomes easier.

13) What does Rollup do?

A) Rollup loads newly added requests of an InfoCube into its existing aggregates, so that the aggregates stay consistent with the cube whenever new data is loaded.

14) What are the inputs for an infoset?

A) The inputs for an infoset are ODS objects and InfoObjects (with master data or text).

15) What internally happens when BW objects like Info Object, Info Cube or ODS are created and activated?

A) When an InfoObject, InfoCube or ODS object is created, BW maintains a saved version of that object but does not make it available for use. Once the object is activated, BW creates an active version that is available for use.

16) What is the maximum number of key fields that you can have in an ODS object?

A) 16.

17) What is the specific advantage of LO extraction over LIS extraction?

A) The load performance of LO extraction is better than that of LIS. In LIS, two tables are used for delta management, which is cumbersome. In LO, only one delta queue is used for delta management.

18) What is the importance of 0REQUID?

A) It is the InfoObject for the request ID. 0REQUID enables BW to distinguish which request data records were loaded with.

19) Can you add programs in the scheduler?

A) Yes. Through event handling.

20) What is the importance of the table ROIDOCPRMS?

A) It holds the IDoc control parameters for data transfer from a source system. This table contains the details of the data transfer, such as the source system of the data, the data packet size, the maximum number of lines in a data packet, etc. The data packet size can be changed through the control parameters option in SBIW, i.e., the contents of this table can be changed.
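
As a quick illustration, the settings for one source system could be read with a small ABAP sketch like the one below; the field names SLOGSYS, MAXSIZE and MAXLINES are assumptions, so verify them against ROIDOCPRMS in SE11.

* Sketch: display the transfer control parameters for one source system.
* SLOGSYS, MAXSIZE and MAXLINES are assumed field names - verify in SE11.
PARAMETERS: p_logsys(10) TYPE c.

DATA: lv_maxsize  TYPE i,
      lv_maxlines TYPE i.

SELECT SINGLE maxsize maxlines FROM roidocprms
  INTO (lv_maxsize, lv_maxlines)
  WHERE slogsys = p_logsys.

IF sy-subrc = 0.
  WRITE: / 'MAXSIZE:',  lv_maxsize,
         / 'MAXLINES:', lv_maxlines.
ELSE.
  WRITE: / 'No control parameters maintained for', p_logsys.
ENDIF.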

21) What is the importance of 'start routine' in update rules?

A) A Start routine is a user exit that can be executed before the update rule starts to allow more complex computations for a key figure or a characteristic. The start routine has no return value. Its purpose is to execute preliminary calculations and to store them in a global data structure. You can access this structure or table in the other routines.
22) When is IDOC data transfer used?

A) IDocs are used for communication between logical systems like SAP R/3, R/2 and non-SAP systems using ALE, and for communication between an SAP R/3 system and a non-SAP system. In BW, an IDoc is a data container for data exchange between SAP systems, or between SAP systems and external systems based on an EDI interface. IDocs support a limited record size of 1000 bytes, so IDocs are not used when loading data into the PSA, since the data there is more detailed; they are used when the record size is less than 1000 bytes.

23) What is partitioning characteristic in CO-PA used for?

A) For easier parallel search and load of data.

24) What is the advantage of BW reporting on CO-PA data compared with directly running the queries on CO-PA?

A) BW has a better performance advantage over reporting in R/3. For a huge amount of data, the R/3 reporting tool is at a serious disadvantage because R/3 is modeled as an OLTP system and is good for transaction processing rather than analytical processing.

25) What is the function of BW statistics cube?

A) BW statistics cube contains the data related to the reporting performance and the data loads of all the InfoCubes in the BW system.

26) When an ODS is in 'overwrite' mode, does uploading the same data again and again create new entries in the change log each time data is uploaded?
A) No.

27) What is the function of 'selective deletion' tab in the manage->contents of an infocube?

A) It allows us to select a particular value of a particular field and delete its contents.

28) When we collapse an InfoCube, is the consolidated data stored in the same InfoCube or in a new one?

A) Data is stored in the same cube.

29) What is the effect of aggregation on the performance? Are there any negative effects on the performance?

A) Aggregation improves the performance in reporting.

30) What happens when you load transaction data without loading master data?

A) The transaction data gets loaded and the master data fields remain blank.

31) When given a choice between a single infocube and multiple InfoCubes with a multiprovider, what factors does one need to consider before making a decision?

A) One has to consider whether the InfoCubes are also used individually. If these cubes are often used individually, then it is better to go for a MultiProvider over many cubes, since reporting is faster for a query on an individual cube than on one big cube with a lot of data.

32) How many hierarchy levels can be created for a characteristic info object?

A) Maximum of 98 levels.

33) What is open hub service?

A) The open hub service enables you to distribute data from an SAP BW system into external data marts, analytical applications, and other applications. With this, you can ensure controlled distribution across several systems. The central object for the export of data is the InfoSpoke. Using it, you define the object from which the data comes and the target into which it is transferred. Through the open hub service, SAP BW becomes a hub of an enterprise data warehouse. The distribution of data becomes transparent through central monitoring of the distribution status in the BW system.

34) What is the function of 'reconstruction' tab in an infocube?

A) It reconstructs the deleted requests from the infocube. If a request has been deleted and later someone wants the data records of that request to be added to the infocube, one can use the reconstruction tab to add those records. It goes to the PSA and brings the data to the infocube.

35) What are secondary indexes with respect to InfoCubes?

A) Index created in addition to the primary index of the infocube. When you activate a table in the ABAP Dictionary, an index is created on the primary key fields of the table. Further indexes created for the table are called secondary indexes.

36) What is DB connect and where is it used?

A) DB Connect is a database connection interface. It is used to connect external databases to BW as source systems so that their data can be extracted for reporting purposes.

37) Can we extract hierarchies from R/3 for CO-PA?

A) No, we cannot; there are no hierarchies in CO-PA.

38) Explain ‘field name for partitioning’ in CO-PA

A) CO-PA partitioning is used to decrease the package size (e.g. by company code).

39) What is V3 update method ?

A) It is an update method in the R/3 source system in which a scheduled batch job collectively transfers the data from the extract structures to the DataSource (delta queue).

40) Differences between serialized and non-serialized V3 updates

41) What is the common method of finding the tables used in any R/3 extraction

A) By using the transaction LISTSCHEMA we can navigate the tables.

42) Differences between table view and infoset query

A) An InfoSet Query is a query using flat tables.

43) How to load data from one InfoCube to another InfoCube ?

A) Through data marts, data can be loaded from one InfoCube to another InfoCube.

44) What is the significance of setup tables in LO extraction?
A) The setup tables hold the historical documents for the initialization: a setup run writes the selected application data into them, and the init/full load reads from the setup tables instead of from the application tables.

45) Difference between extract structure and datasource

A) The DataSource defines the data to be extracted from the source system, whereas the extract structure is the record layout that the extractor fills; based on the DataSource we can then define extraction and transfer rules.
B) The extract structure is a record layout of InfoObjects/fields.
C) The extract structure is created in the source (SAP R/3) system; the DataSource based on it is replicated to the SAP BW system.

46) What happens internally when Delta is Initialized

47) What is referential integrity mechanism ?

A) Referential integrity is the property that guarantees that the values in one column depend on (exist among) the values of another column. This property is enforced through integrity constraints.
48) What is activation of extract structure in LO ?

49) What is the difference between Info IDoc and data IDoc ?

50) What is delta management in LO?
A) It is a method used in the delta update methods, based on the change log in LO.

Variable Types

Types of Variables:

The type of variable determines the object that the variable represents as a placeholder for a concrete value.

There are different types of variables depending on the object for which you want to define variables. These types specify where you can use the variables.

· Characteristic Value Variables

Characteristic value variables represent characteristic values and can be used wherever characteristic values are used.

If you restrict characteristics to specific characteristic values, you can also use characteristic value variables.

· Hierarchy Variables

Hierarchy variables represent hierarchies and can be used wherever hierarchies can be selected.

If you restrict characteristics to specific hierarchies or select presentation hierarchies, you can also use hierarchy variables.

· Hierarchy Node Variables

Hierarchy node variables represent a node in a hierarchy and can be used wherever hierarchy nodes are used.

If you restrict characteristics to specific hierarchy nodes, you can also use hierarchy node variables.

· Text Variables

Text variables represent a text and can be used in descriptions of queries, calculated key figures and structural components.

key performance indicators

If you are struggling to define your company's key performance indicators (KPIs), here is a useful bit of information. I recently discovered an interesting website dedicated to identifying KPIs for just about every category you can think of. And it is FREE!

Yes, indeed -- a website described as "The free Key Performance Indicator (KPI) Library is a community of business professionals that provides guidance in identifying and prioritizing the KPIs that really matter for your organization's success." The categories include business, compliance & legislation, environmental, finance, HR, IT, outsourcing, procurement, project portfolio, R & D, supply chain & logistics, and many others. Warning -- you have to register to use the site but I found it quite useful.

The site contains (at the time of this posting) 943 KPIs. Here are a few by name:

Market share gain comparison %
Ad click-through ratio (CTR)
Cash dividends paid
Share price
Perfect Order Measure
Average customer recency
Average number of trackbacks per post
Number of past due loans
% of service requests posted via web (self-help)
Total energy used per unit of production
Cumulative Annual Growth Rate (CAGR)

With each entry, you get a definition, the category it belongs to, and the ability to share it using any number of book marking applications. You can also comment on the KPI and add your own to the listing. As an example, here is the entry for Cumulative Annual Growth Rate (CAGR):

"This tells the story about any company as to what rate the company has grown over years irrespective of consistency in growth YoY basis. A company might have one successful year and then a bad year. If you compare the growth rate YoY basis it may give a different picture and be concluded as lack of consistency in management. But if one looks at the CAGR, it will explain the real growth over years. It is calculated as:

=Power(Revenue Year (n)/Revenue Year(1),1/n) - 1

Where, Revenue Year (n)= n-th year Revenue Revenue Year(1) = 1st year Revenue"
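
To make the quoted formula concrete (applying it exactly as written): if revenue grows from 100 in year 1 to 200 in year n = 5, then CAGR = Power(200/100, 1/5) - 1 = 2^0.2 - 1, which is roughly 0.149, i.e. about 15% growth per year.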

And on it goes...

How to retain deltas when you change LO extractor in Production system

A requirement may come up to add new fields to an LO cockpit extractor that is up and running in the production environment, i.e. the extractor is delivering daily deltas from the SAP R/3 to the BW system.

Since this change has to be done in the R/3 production system, there is always a risk that the daily deltas of the LO cockpit extractor get disturbed. If the delta mechanism is disturbed (the delta queue is broken), there is no other way than doing a re-initialization for that extractor. However, such a re-init is not easy in terms of time and resources, and no organization would be willing to provide that much downtime for live reporting based on that extractor.

As all of us know, initialization of an LO extractor is a critical, resource-intensive and time-consuming task. The prerequisites for filling the setup tables are that we need to lock the users out of transactional updates in the R/3 system and stop all batch jobs that update the base tables of the extractor. Then we need to schedule the setup jobs with suitable date ranges/document number ranges.

We also came across such a scenario, where there was a requirement to add 3 new fields to the existing LO cockpit extractor 2LIS_12_VCITM. Initialization had been done for this extractor a year earlier and the data volume was high.

We adopted a step-by-step approach to minimize the risk of the delta queue getting broken or disturbed. Hopefully this procedure will help all of us who have to work through similar scenarios.

Step by Step Procedure:-

1. Carry out the changes to the LO cockpit extractor in the SAP R/3 Dev system.
As per the requirement, add the new fields to the extractor.
These new fields might be present in the standard supporting structures that you get when you execute "Maintain Data source" for the extractor in LBWE. If all required fields are present in the supporting structures mentioned above, just add these fields using the arrow buttons provided; there is no need to write user exit code to populate them.
However, if these fields (or some of the required fields) are not present in the supporting structures, you have to go for an append structure and user exit code. The coding in the user exit is required to populate the newly added fields. You have to write the ABAP code in the user exit under CMOD, in include ZXRSAU01 (see the sketch after these steps).
All of the above changes will ask you for a transport request. Assign an appropriate development class/package and assign all these objects to one transport request.

2.Carry out changes in BW Dev system for objects related to this change.
Carry out all necessary changes in BW Dev system for objects related to this change (Info source, transfer rules, ODS, Info cubes, Queries & workbooks). Assign appropriate development class/Package and assign all these objects into a transport request

3.Test the changes in QA system.
Test the new changes in SAP R/3 and BW QA systems. Make necessary changes (if needed) and include them in follow-up transports.

4. Stop the V3 batch jobs for this extractor.
The V3 batch jobs for this extractor are scheduled to run periodically (hourly/daily, etc.). Ask the R/3 system administrator to put this job schedule on hold or cancel it.

5. Lock out users and batch jobs on the R/3 side and stop the process chain schedule on BW.
In order to avoid changes to the database tables of this extractor, and hence the risk of losing data, ask the R/3 system administrator to lock out the users. The batch job schedule also needs to be put on hold or cancelled.
Ask the system administrator to clear any pending queues for this extractor in SMQ1/SMQ2. Pending or errored-out V3 updates in SM58 should also be processed.
On the BW production system, the process chain containing the delta InfoPackage for this extractor should be stopped or put on hold.

6. Drain the delta queue to zero for this extractor.
Execute the delta InfoPackage from BW and load the data into the ODS and InfoCubes. Keep executing the delta InfoPackage until you get 0 records with a green light for the request on the BW side. You should also see 0 LUW entries in RSA7 for this extractor on the R/3 side.

7. Import the R/3 transports into the R/3 production system.
In this step we import the R/3 transport request related to this extractor. This will include the user exit code as well. Please ensure that there is no syntax error in include ZXRSAU01 and that it is active. Also ensure that objects such as the append structure are active after the transport.

8.Replicate the data source in BW system.
On BW production system, replicate the extractor (data source).

9.Import BW transport into BW Production system.
In this step we import BW transport related to this change into BW Production system.

10. Run the program to activate the transfer rules.
Execute program RS_TRANSTRU_ACTIVATE_ALL, enter the InfoSource and source system name, and execute. This makes sure that the transfer rules for this InfoSource are active.

11. Execute the V3 job manually on the R/3 side.
Go to LBWE and click on Job Control for the application area of this extractor (for 2LIS_12_VCITM it is application 12). Execute the job immediately; it should finish with no errors.

12. Execute the delta InfoPackage from the BW system.
Run the delta InfoPackage from the BW system. Since there has been no data update, this extraction request should be green with zero records (a successful delta extraction).

13. Restore the schedules on the R/3 and BW systems.
Ask the system administrator to resume the V3 update job schedule and the batch job schedule, and to unlock the users. On the BW side, restore the process chain schedule.

From the next day onwards (or as per the load frequency), you should receive the delta for this extractor, with data also populated for the new fields.
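
As referenced in step 1, the user exit coding could look roughly like the sketch below. This is only an illustration: the extract structure name MC12VC0ITM, the appended field ZZROUTE and the lookup from the delivery header are assumptions that must be replaced by your own append fields and source logic.

* Include ZXRSAU01 - user exit EXIT_SAPLRSAP_001 for transaction data.
* Illustrative sketch only: MC12VC0ITM, ZZROUTE and the LIKP lookup are
* assumptions; adapt them to your own append structure and business rule.
DATA: l_s_vcitm TYPE mc12vc0itm,
      l_tabix   TYPE sy-tabix.

CASE i_datasource.
  WHEN '2LIS_12_VCITM'.
    LOOP AT c_t_data INTO l_s_vcitm.
      l_tabix = sy-tabix.
*     Fill the appended field from the delivery header (hypothetical rule).
      SELECT SINGLE route FROM likp INTO l_s_vcitm-zzroute
        WHERE vbeln = l_s_vcitm-vbeln.
      MODIFY c_t_data FROM l_s_vcitm INDEX l_tabix.
    ENDLOOP.
ENDCASE.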

What Is SPRO In BW Project?


1) What is spro?


1. SPRO is the transaction code for the Implementation Guide, where you can make configuration settings.
* Type SPRO in the transaction box and you will get the Customizing screen: Execute Project.
* Click on the SAP Reference IMG button; you will come to the Display IMG screen.
* The following path will allow you to make the configuration settings:
SAP Customizing Implementation Guide -> SAP NetWeaver -> SAP Business Information Warehouse.
2) How is it used in a BW project?
2. SPRO is used to configure the following settings:
* General settings: printer settings, fiscal year settings, ODS object settings, authorization settings, settings for displaying SAP documents, and so on.
* Links to other systems: links between flat files and BW systems, between R/3 and BW and other data sources, and links between the BW system and Microsoft Analysis Services, Crystal Enterprise, etc.
* UD Connect settings: configuring BI Java Connectors, establishing the RFC destination to SAP BW for the J2EE Engine, and installing availability monitoring for UD Connect.
* Automated processes: settings for batch processes, background processes, etc.
* Transport settings: settings for the source system name change after transport, and creating a destination for import post-processing.
* Reporting-relevant settings: BEx settings and general reporting settings.
* Settings for Business Content, which is already provided by SAP.

3) What is the difference between IDoc and PSA in the transfer methods?
3. PSA (Persistent Staging Area): a holding area for raw data. It contains detailed requests in the format of the transfer structure. It is defined per DataSource and source system, and is therefore source-system dependent.

IDocs (Intermediate DOCuments): data structures used as API working storage for applications that need to move data into or out of SAP systems.

V3 update: Questions and answers

Question 1
Update records are written to SM13 although you do not use the extractors from the logistics cockpit (LBWE) at all.
Active DataSources have been accidentally delivered in a PI patch; for that reason, extract structures are set to active in the logistics cockpit. Call transaction LBWE and deactivate the active structures. From then on, no additional records are written to SM13.
If the system displays update records for application 05 (QM) in transaction SM13 even though the structure is not active, see Note 393306 for a solution.

Question 2
How can I selectively delete update records from SM13?
Start the report RSM13005 for the respective module (e.g. MCEX_UPDATE_03).

* Status COL_RUN INIT: without Delete_Flag but with VB_Flag (the records are updated).

* Status COL_RUN OK: with Delete_Flag (the records are deleted for all modules whose COL_RUN status is OK).

With the IN_VB flag, data is only deleted if there is no delta initialization; otherwise, the records are updated.
MAXFBS: the number of records processed without a commit.

ATTENTION: The delta records are deleted irrevocably after executing report RSM13005 (without the IN_VB flag). You can then reload the data into BW only with a new delta initialization!

Question 3
What can I do when the V3 update loops?
Refer to Note 0352389. If you need a fast solution, simply delete all entries from SM13 (executed for V2); however, this does not solve the actual problem.

ATTENTION: THIS CAUSES DATA LOSS. See question 2 !

Question 4
Why has SM13 not been emptied even though I have started the V3 update?

* The update record in SM13 contains several modules (for example, MCEX_UPDATE_11 and MCEX_UPDATE_12). If you start the V3 update for only one module, the other module still has INIT status in SM13 and is waiting for the corresponding collective run. In some cases, the entry might also not be deleted if the V3 update has been started for the second module. In this case, schedule report RSM13005 with the DELETE_FLAG (see question 2).

* V3 updating no longer functions after the PI upgrade because you did not load all the delta records into the BW system prior to the upgrade. Proceed as described in Note 328181.

Question 5
The entries from SM13 have not been retrieved even though I followed note 0328181!
Check whether all entries were actually deleted from SM13 for all clients. Look for records within the last 25 years with user * .

Question 6
Can I schedule the V3 update in parallel?
The V3 update already uses collective processing; you cannot run it in parallel.

Question 7
The Logistics Cockpit extractors deliver incorrect numbers. The update contains errors !
Have you installed the most up-to-date PI in your OLTP system?
You should have at least PI 2000.1 patch 6 or PI 2000.2 patch 2.

Question 8
Why has no data been written into the delta queue even though the V3 update was executed successfully?
You have probably not started a delta initialization. You have to start a delta initialization for each DataSource from the BW system before you can load the delta. Check in RSA7 for an entry with a green status for the required DataSource. Refer also to Note 0380078.

Question 9
Why does the system write data into the delta queue even though the V3 update has not been started?
You are using automatic goods receipt posting (transaction MRRS) and start it in the background. In this case the system writes the records for DataSources of application 02 directly into the delta queue (RSA7). This does not cause duplicate data records and does not result in any inconsistencies.

Question 10
Why am I not able to carry out a structural change in the Logistics Cockpit although SM13 is blank?
Inconsistencies have occurred in your system: there are records in update table VBMOD for which there are no entries in table VBHDR. Because of those missing records, there are no entries in SM13. To remove the inconsistencies, follow the instructions in the solution part of Note 67014. Please note that no postings may be made in the system during the reorganization!

Question 11
Why is it impossible to schedule a V3 job from the Logistics Cockpit? The job always terminates immediately.
The update job cannot be scheduled due to missing authorizations. For further information see Note 445620.

What is designer and creation of Universe?

What is Designer?

Designer is a BusinessObjects IS module used by universe designers to create and maintain universes. Universes are the semantic layer that isolates end users from the technical issues of the database structure. Universe designers can distribute universes to end users by moving them as files through the file system, or by exporting them to the repository.

BO Universe is essentially a connection layer sitting between the source data and the DW. It is defined by the data mapping or schema or the relationship between database tables. Each universe is accessed by certain category of users. For example, finance people will access finance universe, sales people will access sales universe. The analogy is similar to a data mart.

The advantage of the BO universe is that if there is any change in the source data structure, this change needs to be made only in the universe, and its effect gets pushed down to all the reports emanating from this universe. A good universe design helps in improving speed and contributes to best practices when using BO.

How do you design a universe?

The design method consists of two major phases.
During the first phase, you create the underlying database structure of your universe. This structure includes the tables and columns of a database and the joins by which they are linked. You may need to resolve loops which occur in the joins using aliases or contexts. You can conclude this phase by testing the integrity of the overall structure.
During the second phase, you can proceed to enhance the components of your universe. You can also prepare certain objects for multidimensional analysis. As with the first phase, you should test the integrity of your universe structure. Finally, you can distribute your universes to users by exporting them to the repository or via your file system.

Need to delete data for enhancing a DataSource?

There is no need to delete any data on the BW or R/3 side when enhancing a DataSource if you are loading data into an ODS in overwrite mode.

Simply follow the below steps:

1. Add new fields to ODS & cubes and adjust update rules.

2. Clear LBWQ (the update queue) by running the V3 job only.

3. Clear RSA7 (the delta queue) by running the InfoPackage to pull the data into BW.

4. Move the DataSource changes to production, then replicate the DataSource and activate the transfer rules.

5. Delete the data in the setup tables (transaction LBWG).

6. Fill the setup tables for the historic data.

7. Initialize the DataSource if required, without data transfer (zero initialization).

8. Pull the data from R/3 into the BW ODS in overwrite mode with the Repair Full Request option.

Since the data is loaded in overwrite mode there is no problem; just load the historic data again as well and push the delta from the ODS onwards (to further ODS objects/cubes).

9. Push the delta from the ODS to the cube.

Tuesday, December 1, 2009

To create a DBLINK using Derived Tables

1. Create a DBLINK in Oracle on Server1 with the following statement:

CREATE DATABASE LINK dblink_name CONNECT TO user_name_on_server2 IDENTIFIED BY password USING 'connect_string_to_server2';

2. Create a synonym for the DBLINK on Server1 using the following statement:

CREATE SYNONYM synonym_name FOR user_name_on_server2.table_name_on_server2@dblink_name_server2;

3. Ensure the synonym for the linked database on Server1 is added to the tnsnames.ora file of the target database on Server2. If not, Oracle will return the ORA-12154 error message.
4. Log in to Server1.
5. Query the DBLINK synonym using the following SQL*Plus statement:

SELECT * FROM synonym_name;

6. Log into Designer.
7. Click Insert Table > Derived Tables.
8. Query the DBLINK synonym using the same statement:

SELECT * FROM synonym_name;

9. If an error is returned, close the Derived Tables dialog box and reopen it. If the message "Parse OK" is returned, click OK.

Tuesday, November 24, 2009

How to insert two queries into one BEx Analyzer workbook.

I frequently need to use data from different InfoProviders. Sometimes, instead of creating a MultiProvider, it is faster to put two or more queries into one workbook and create a separate tab to display the joined data. Here are 7 steps to create such a solution:

1. Create queries you would like to join.
2. Open one of the queries in BEx Analyzer and save it as a workbook.
3. Create two additional tabs in the workbook and give them names (e.g., query2, results).
4. Edit the query2 tab by adding design items: click BEx Analyzer > Design Toolbar > Insert Analysis Grid.
5. Open the Properties dialog box, change the Data Provider's name and click the Create button.
6. Choose the second query and confirm your choice.
7. Create a table on the results tab that merges data from both queries. Save the workbook.

What about the selection screen? The variables related to the queries will be displayed on one selection screen; if you use the same variable in both queries, there will be only one field for the shared variable.

Use of Analysis Process Designer in BI7

Everyone who has worked with BI 7.0 knows that the Analysis Process Designer (APD) is a workbench for creating, executing, and monitoring analysis processes. An analysis process is primarily based on data that has been consolidated in the data warehouse and that exists in InfoProviders. One application of APDs, from a technical point of view, is feeding query results into a DataStore object or into an attribute of a characteristic. In this post I review a few examples of how consultants may use APDs to address particular analysis tasks.



The Analysis Process Designer allows you to set up a model in which you move data from a source to a target and apply transformations on the way. As a source we can use any InfoProvider in the data model. The following types of data targets are available in the Analysis Process Designer:

● Attributes of a characteristic

● DataStore objects

● Files

● CRM attributes

● Target groups for SAP CRM

● Data mining models:

○ Training the decision tree

○ Training the clustering model

○ Training the scoring model (regression)

○ Training data mining models from third parties

○ Creating association analysis models


1. Examples of business applications
1.1. ABC classification for customers

In ABC classification we assign customers to certain categories based on business rules. For example, you can classify your customers into three classes A, B and C according to the sales revenue or profit they generate. When you choose ABC classification in APD you have to specify the characteristic for which the classification is to be performed, its attribute, key figure, appropriate query, and threshold values for the individual ABC classes.
1.2. Scoring (traffic light) model

In a number of BI scenarios we may have a requirement for generating scoring or traffic light indicators for a certain set of KPIs. We may want to know, for example, how close the actual value is to the budgeted one. A range of traffic lights (red/yellow/green) needs to be displayed by geography, product group, profit center, etc.



Traffic light indicators need to be assigned to each report line based on complex logic. For example, if one or two countries in a region are underperforming, the region's indicator is set to yellow; if more than two countries are underperforming, the region's indicator for the analyzed period should be set to red.



As the values for traffic light indicators are not cumulative, they have to be calculated separately for each level of granularity. Knowing the indicators at the lowest level of granularity does not help much in deriving them for upper levels, as there is a business rule defined for each level separately. Therefore, we have to build a set of queries for each level of the data model where traffic light indicators need to be displayed. The APD then helps us feed the query results into the cube used for reporting on the scoring results.


2. Example of data flow for scoring model

The following data flow model can be used for calculating the scoring results. The InfoCube contains the measures (KPIs) used for scoring, such as sales volume and sales budget. It also has a set of traffic light KPIs that need to be populated with indicators for each granularity level.


3. Why use the APD in the scoring model

It is important to note that in the scoring model, instead of the APD/query approach, one can use a transformation (formerly known as an update rule) connecting the cube to itself. In the start/end routine we can build the business logic required for calculating the scoring results.

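For illustration, such a routine could look roughly like the following end-routine sketch. The key figure names SALES_ACT, SALES_BUD and TRAFFIC_LIGHT, as well as the scoring thresholds, are assumptions; <RESULT_FIELDS> is the field symbol typed by the generated routine frame.

* End-routine sketch for a BI 7.0 transformation (cube to itself).
* SALES_ACT, SALES_BUD and TRAFFIC_LIGHT are assumed key figure names.
DATA: lv_ratio TYPE p DECIMALS 2.

LOOP AT RESULT_PACKAGE ASSIGNING <RESULT_FIELDS>.
  IF NOT <RESULT_FIELDS>-SALES_BUD IS INITIAL.
    lv_ratio = <RESULT_FIELDS>-SALES_ACT / <RESULT_FIELDS>-SALES_BUD.
    IF lv_ratio >= '0.95'.
      <RESULT_FIELDS>-TRAFFIC_LIGHT = 3.      " green
    ELSEIF lv_ratio >= '0.80'.
      <RESULT_FIELDS>-TRAFFIC_LIGHT = 2.      " yellow
    ELSE.
      <RESULT_FIELDS>-TRAFFIC_LIGHT = 1.      " red
    ENDIF.
  ENDIF.
ENDLOOP.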

However, this approach requires complex development in ABAP. The specific scoring requirements have to be documented by a business user in advance, which usually makes the development cycle longer, and any adjustment to the scoring logic requires ABAP code modifications.



Alternatively, when we use the query/APD approach, analysts are able to define the scoring requirements in the queries, and to test and modify them whenever needed. They can also run the queries and check preliminary results. Needless to say, it is usually easier to modify and test queries than transformations with ABAP code.

Monday, November 23, 2009

Why do we need to debug: BREAK POINT

Definition:
The BREAK-POINT statement is a debugging aid. When we run a program normally, it is interrupted at the statement and the system automatically starts the debugger, allowing you to display the contents of any fields in the program and check how the program continues. If the program is running in the background or in an update task, the system generates a system log message instead.

Break point types:

1. Static
2. Dynamic
   a. Directly set
   b. Specially set:
      i. at statement
      ii. at event
      iii. at function module
      iv. at system exceptions
Static Breakpoint

Written in the ABAP program itself.
Should be used in the development environment only.

* Static breakpoint example: stop only if the previous statement succeeded.
IF SY-SUBRC EQ 0.
  BREAK-POINT.
ENDIF.


Dynamic Breakpoint

* User-specific.
* Can be set, deactivated or deleted at runtime.
* Deleted automatically when the user logs off from the R/3 system.
* Can be set even when the program is locked by another programmer.
* Conditions (logic) can be built in while defining it.


Different ways of putting the Break-point in the Program


1) Writing the BREAK-POINT statement in the program.
2) Writing the BREAK statement together with a user name in the program, e.g. BREAK <username>.
3) By clicking the red button (stop icon) on a line, one can create a breakpoint at that line.
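
As a minimal, self-contained illustration (the report name ZBREAK_DEMO and the user name DEVELOPER01 are placeholders, not objects from this post):

REPORT zbreak_demo.                " hypothetical demo report

DATA: lv_total TYPE i.

DO 5 TIMES.
  lv_total = lv_total + sy-index.
ENDDO.

BREAK-POINT.                       " static breakpoint: always stops here in dialog mode
BREAK developer01.                 " user-specific: stops only when user DEVELOPER01 runs it

WRITE: / 'Total:', lv_total.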

How to Create/Delete Break Point

* You can set a breakpoint by double-clicking on the statement while debugging.
* You can set a breakpoint through the menu: place the cursor on the statement where you want to put the breakpoint,
o then go to the menu bar > Breakpoint > Create/Delete.
* After creating the breakpoints, press Ctrl+S or select Save in the menu bar to save all the breakpoints for that session.
* Viewing all the breakpoints in an ABAP program:
o Go to Utilities > Breakpoints > Display.
* If you want to delete a particular breakpoint, select it in this display and delete it.
* After deleting all the breakpoints, the table will be empty and no breakpoints are left.

How to Activate/Deactivate Break point

1. Activate/Deactivate: activates a single deactivated breakpoint and vice versa.
2. Delete All: deletes all the breakpoints in the program.
3. Deactivate All: deactivates all the breakpoints in the program.
4. Activate All: activates all the deactivated breakpoints.
5. Save: after setting the breakpoints, clicking the Save button saves them for that session.



One can put a breakpoint at a statement, a subroutine, a function module or an exception at runtime.

For example, if breakpoints are set in a program that contains a SELECT statement, one function module and two subroutines, then:

the program will stop at the SELECT statement,
the program will stop at each subroutine, and
the program will stop at the function module.

Thursday, November 19, 2009

FAQ - Information Broadcasting and General

Where can I get up-to-date information about broadcasting using an Enterprise Portal?

See SAP Note 969040.
Which browsers, e-mail servers, and clients support MHTML?

SAP cannot provide a complete list of the software that supports MHTML. Please clarify this with your software vendor. Some of the systems that support MHTML are listed below, but it is still necessary to clarify the details with the vendor.
MHTML format (MIME Encapsulation of Aggregate HTML Documents, see ftp://ftp.ietf.org/rfc/rfc2557.txt), is supported by the following Web browsers:
• Microsoft Internet Explorer
In addition, it is supported by the following email servers and clients:
• Microsoft Outlook
• IBM Lotus Domino 6 (partial)
• IBM Lotus Domino Everyplace 3.0
What do I need to consider in my support package stack upgrade planning when using information broadcasting to the Portal?

For information broadcasting to work properly, you need to have the same support package stack on both the BI system and the SAP NetWeaver Portal (with Knowledge Management). Upgrades need to be done simultaneously on both sides.
Can I broadcast e-mails to distribution lists that are defined in a groupware system (e.g. MS Exchange, Lotus Notes)?

Yes. For more information, see SAP Note 834102 (SMP login required).
What is the best approach and the best resources to learn about Information Broadcasting?

We recommend the following resources (in this order):
1. For an overview, see the E-Learning Maps on BI in SAP NetWeaver 7.0 at http://service.sap.com/RKT-NetWeaver (SMP login required). Information broadcasting is documented in great detail in the standard BI documentation.
2. If you integrate broadcasting with the SAP NetWeaver Portal, you have to implement the required settings for BI in the IMG (transaction SPRO) under "SAP Transaction SPRO/SAP Reference IMG ->SAP Customizing Implementation Guide ->SAP NetWeaver ->Business Intelligence ->Reporting-Relevant Settings ->Web-Based Settings ->Integration into Portal". The Customizing step, "Overview: Integration into Portal" contains a detailed description of the steps and settings required in BI and Portal.

What is the pricing policy on information broadcasting?

Contact your local account executive for details.
Can I broadcast graphs in addition to tabular reports?

Yes. When you broadcast a report, it is broadcast with the current display.
Can I use a single report to distribute filtered or user or group-specific information to each individual user?

Yes, there are two possibilities:
1. User-specific broadcasting based on existing users.
2. Data bursting based on user information in BI master data.
For example, a single cost center report that is broadcast to all cost center managers once a week, or a regional sales report that broadcasts only the regional results to each group of sales people, while sales managers see all the groups for which they are responsible. For instructions, see the standard SAP BI documentation.
Can I broadcast at any time?

Yes. Depending on the authorization settings of your SAP NetWeaver BI system, users can set up their own ad hoc schedules.
Can I change my broadcast settings after I set them up?

Yes. From the "information broadcasting" tab page of the BEx Web Analyzer, you can select "Overview of Scheduled Settings" to manage all broadcasts you have authorizations to control.
Do individual users see customized precalculated results for their broadcast report (such as only their region, only their cost center, only their benefits information)?

Yes. Authorizations can be leveraged by the broadcasting process to narrow the results of each individual user. Additionally, you can use data bursting to tailor the result of broadcasts even if the recipients are not known BI users.
Do all users have to be defined in the SAP NetWeaver BI system in order to broadcast to them?

No. E-mail addresses can also be targeted for broadcasts. Using data bursting, you can even send personalized broadcasts to non-BI users.
Can I compress output to avoid issues with e-mail size limitations?

Yes. SAP NetWeaver BI provides a zipping service that can be applied to broadcasts.
Can I subscribe to broadcast results and be notified when new broadcasts are distributed?

Yes. This is one of the advantages of incorporating Knowledge Management services of SAP NetWeaver Portal. When broadcasts are sent to the portal, reports become KM documents to which the user can subscribe.
Can I incorporate existing corporate email groups and or users into the broadcasting wizard?

Broadcasts can be sent to registered SAP NetWeaver BI users, SAP NetWeaver BI roles, and external e-mail accounts. Corporate e-mail groups have to be imported by copying and pasting the e-mail addresses into the Broadcaster; they are then stored in the user's history for reuse.
Can I broadcast from one language (such as English) to other languages?

Yes. This assumes all language-relevant elements of the report have been maintained in the target language (such as texts and hierarchies).
Can I broadcast only to myself and not to everyone?

Yes. You can either send an e-mail broadcast to yourself or to your personal portfolio (KM folder) in SAP NetWeaver Portal. In both cases, the broadcast can be a single, immediate broadcast or a regularly scheduled broadcast.
We have thousands of documents in SAP NetWeaver BI Content framework - is there a migration path to get these into KM? Is there a way to access these directly without having to open the related SAP NetWeaver BI report(s)?

A migration process is planned for the medium term.
In SAP BW 3.5, documents can be accessed from within the KM function of Enterprise Portal using corresponding NetWeaver BI repository managers. New documents can also be created and stored in SAP NetWeaver BI Content framework from within KM using these repository managers.
With BI in SAP NetWeaver 7.0, you can migrate your documents from BI to KM (see the documentation under "Business Intelligence ->Data Warehousing ->Data Warehouse Management/Documents ->Working with Documents in Knowledge Management").
Can I setup the broadcast so the current date will be included in the broadcast header?

Yes. Variables can be incorporated into the text of the broadcast to provide the date or time of the broadcast in the header of the broadcast, if required. This can also be combined with free text for more flexible and descriptive broadcast headers.
Does Reporting Agent alerting have a migration path to the new broadcasting-based alerting? How is information broadcasting integrated?

Reporting Agent settings are still supported in SAP NetWeaver 7.0. Your existing scenarios still run. For new alert scenarios, we recommend using the BEx Broadcaster instead of the Reporting Agent. There is no migration of Reporting Agent settings to Broadcaster settings.
Can I use information broadcasting to distribute precalculated queries, Web applications, and workbooks to a third-party file server, Web server or document management systems?

Yes. With information broadcasting, you can precalculate queries, Web applications, and workbooks and publish them into the Knowledge Management of the SAP NetWeaver Portal.
In KM, you can easily create a Repository Manager (CM repository with persistence mode FSDB) that is attached to a file system directory (for example, the directory of an Internet Information Server (IIS)). You have to create a link in the KM folder of documents to the folder of the CM Repository attached to the file system or you can define your CM Repository as an entry point in KM. For more information, see SAP Note 827994 (SMP login required).
Information broadcasting can automatically put a new report on the third-party file server (for example, using the data change event in the process chain). KM offers repository managers for many different file servers, Web servers, and document management systems (such as IIS and Documentum):
1. Create CM Repository attached to file system.
2. Use iView KM Content to create subfolder in file system (optional).
3. Set permission to Administrator (optional).
4. Create link in /documents to folder of CM Repository attached to file system or define CM Repository as entry point. (See SAP Note 827994.)
5. Schedule Broadcasting Settings that export to a linked folder of CM Repository.
Because documents created via information broadcasting have additional attributes attached to them that mark them as broadcast documents, it is not possible to store these kinds of documents in a "pure" file system repository, because such a repository usually only stores properties like "last changed", "creator", and so on. Fortunately, KM provides a mechanism to use a file system repository for these documents nevertheless: the additional properties are stored in the database.
The "persistence mode" of the repository must be "FSDB" to allow this kind of behavior. Note that because of the distributed storage of the file and its additional properties, the property assignment is lost if the document is moved around in the file system with a non-KM tool such as Windows Explorer.
Are there any new hardware or sizing needs regarding information broadcasting in SAP NetWeaver 7.0 compared to the SAP BW 3.x function?

An information broadcasting query is treated exactly like a normal query. There are no additional hardware requirements if the broadcast queries were already taken into account in your hardware sizing. If, however, a significant number of additional broadcasting queries is needed, consider reviewing your sizing.
In addition, using broadcasting not only via e-mail but also with the SAP NetWeaver Portal requires the additional installation of a J2EE server with SAP NetWeaver Portal or KM.
I plan to schedule the broadcast of a fixed number of documents on a regular basis. How can I calculate the system requirements needed?

To find out your sizing needs, you can use the Quick Sizer tool (http://service.sap.com/quicksizer - SMP login required). The load caused by precalculation of queries must be mapped to an adequate number of virtual users.
Example: the load caused by precalculating 100 queries in 2 hours (50 queries per hour) can be simulated by 50 users of category "InfoConsumer", since by definition each "InfoConsumer" causes the load of one navigation step per hour.
Is there a performance difference in accessing SAP NetWeaver BI queries using online links (URLs) in a KM folder or directly using a URL within a Web browser?

There should not be any difference in performance.
What is important to know regarding J2EE memory consumption of information broadcasting scenarios?

It is recommended to allocate at least 1 GB of heap size for the J2EE Engine. The lower the heap size, the more time is spent on full garbage collection, and frequent full garbage collection should be avoided. As a rule of thumb, the J2EE Engine should not spend more than 5% of its CPU time on garbage collection.
How do the broadcast channels E-mail and SAP EP KM Folders compare in terms of performance?

Sending broadcast reports by e-mail is 50-70% faster than deployment into the SAP NetWeaver Portal in the current support package stack for SAP NetWeaver 7.0.
We want to take advantage of information broadcasting. How can the contents sent, such as Excel workbooks, MHTs, PDFs be encrypted automatically?

The Broadcaster itself does not provide encryption. However, e-mails are sent out using an external SMTP server. Check if your SMTP server provides encryption and see also SAP Note 149926.
How can I broadcast from an SAP NetWeaver 7.0 BI system into SAP Enterprise Portal 6.0?

There are several options available for broadcasting into an SAP Enterprise Portal 6.0. We recommend using a WebDAV Repository Manager. For more information, see SAP Note 969040.
Can I broadcast from an SAP NetWeaver 7.0 BI system into SAP Enterprise Portal 5.0?

No. This is not possible, and no workaround is supported.
How can I broadcast from a SAP NetWeaver 7.0 BI system into a federated portal network?

There are several options available for broadcasting into a federated portal network. We recommend using a WebDAV Repository Manager. For more information, see SAP Note 969040.
Can I display broadcasts of the same SAP NetWeaver 7.0 BI system that are triggered using ABAP runtime and using Java runtime to the same KM folder in a federated portal?

Yes. Both types of broadcasts can be broadcast to the same KM folder in the local portal of the BI system. Using remote role assignment, the Business Explorer showcase role of the local portal can be displayed within the federated portal. The local portal acts as the producer and the federated portal acts as the consumer. The Business Explorer showcase role displays typical KM folders using BEx portfolio.

Tuesday, November 17, 2009

ABAP Tips and Tricks Database

http://wiki.sdn.sap.com/wiki/display/ABAP/ABAP+Tips+and+Tricks+Database

Date/time operations in ABAP.

With ABAP, you can do simple date calculations directly. If you need something more advanced, such as adding a month (not simply 30 days) to a date, SAP provides many function modules to do the job. Here is my list of ABAP date functions; their names usually explain what they do (see the short sketch after the list):

CALC_DIFF_IN_MONTHS_DAYS
COMPUTE_YEARS_BETWEEN_DATES
DATE_CHECK_PLAUSIBILITY
DATE_COMPUTE_DAY
DATE_CONV_EXT_TO_INT
DATE_CONVERT_TO_FACTORYDATE
DATE_GET_WEEK
DATE_TO_PERIOD_CONVERT
FIRST_DAY_IN_PERIOD_GET
HOLIDAY_GET – holidays list for a plant
L_MC_TIME_DIFFERENCE – Calculate time difference in minutes
LAST_DAY_IN_PERIOD_GET
MONTH_NAMES_GET
MONTH_PLUS_DETERMINE
PERIOD_AND_DATE_CONVERT_OUTPUT
RP_ASK_FOR_DATE
RP_CALC_DATE_IN_INTERVAL
RP_LAST_DAY_OF_MONTHS
SD_DATETIME_DIFFERENCE
WEEK_GET_FIRST_DAY – convert YYYYWW to date
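
A minimal sketch of typical usage, written under the assumption that RP_CALC_DATE_IN_INTERVAL has its usual parameters (DATE, DAYS, MONTHS, SIGNUM, YEARS, CALC_DATE) - verify the interface in SE37 on your system:

DATA: V_TODAY     TYPE SY-DATUM,
      V_NEXT_WEEK TYPE SY-DATUM,
      V_NEXT_MON  TYPE SY-DATUM.

V_TODAY = SY-DATUM.
* Plain arithmetic on a date field adds calendar days.
V_NEXT_WEEK = V_TODAY + 7.

* Add one real month (not just 30 days) via a function module.
CALL FUNCTION 'RP_CALC_DATE_IN_INTERVAL'
  EXPORTING
    DATE      = V_TODAY
    DAYS      = 0
    MONTHS    = 1
    SIGNUM    = '+'
    YEARS     = 0
  IMPORTING
    CALC_DATE = V_NEXT_MON.

WRITE: / 'Today:', V_TODAY,
       / 'Next week:', V_NEXT_WEEK,
       / 'In one month:', V_NEXT_MON.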

Monday, November 16, 2009

How To: Trigger Background Jobs with Background User

Summary
Most of the processing in SAP happens in terms of background jobs. Some of these jobs are very critical and need to run within a specified duration. In a production support environment, the need often arises to repair failures of these jobs. Sometimes our user ID lacks the authorization to run some of these jobs, which results in missed SLAs or dependencies. This blog explains, step by step, how to trigger such jobs with a background user ID that has most of the required authorizations.
Step by Step Solution
Identifying Background Job
Using TCode SM37, with the filter set to the respective user ID and the job type set to scheduled/released, identify the correct job that needs to be scheduled with the background user ID.
Goto Change Options
Select the appropriate job and then, from the menu options, select Job -> Change or use CTRL+F11. This opens the job definition screen; here, press the 'Step' button.

Step List Overview
The Step button leads to the Step List Overview screen. Here, simply click on the job step and press the Change button (CTRL+SHIFT+F7).

Changing User
In the next screen simply change the User from IDADMIN to ALEREMOTE and save the job.

Authentication
This is a safe approach: although we can change the user ID of any job, the user ID with which the job was created remains unchanged. This can be helpful in tracking any misuse of this functionality, and it also helps for audit purposes.

Thursday, November 12, 2009

BW SYSTEM TUNING

From an end user's perspective, performance simply means that the next logical dialog screen appears on his or her GUI without any long delay. If there is a delay, the system appears to be performing badly.
Traditionally, performance tuning of an SAP application deals with buffer management, database tuning, work process tuning, fragmentation and reorganization of the database, reducing I/O contention, operating system tuning, table striping, and so on, depending on the nature of the system.
This document deals with performance tuning from a BW perspective rather than with general R/3 parameter tuning: query performance, data load performance, aggregate tuning, and so on.
This document will focus on the following key aspects in detail.
1. What are the different ways to Tune an SAP system? ( General )
2. What are the general settings we need to adapt in a good performing BW system?
3. What are the factors which influence the performance on a BW system?
4. What are the factors to consider while extracting data from source system?
5. What are the factors to consider while loading the data?
6. How to tune your queries?
7. How to tune your aggregates?
8. What are the different options in Oracle for a good performing BW system?
9. What are the different tools available to tune a BW system? (With screenshots).
10. What are the best practices we can follow in a BW system?
1. What are the different ways to tune an SAP system?

The aim of tuning an SAP system should focus on one major aspect: availability of the next logical screen to all users (end users/business users/super users), with equal or unequal (depending on the business requirement) allocation of technical resources, in a timely manner. We also need to keep in mind that we should spend just the optimal amount of money on the technical resources.
There are two major paths we need to follow to tune an SAP system:

Tune it depending on the business requirement.

Tune it depending on the technical requirement.

Business requirement:

Consider how many lines of business we have in our company. Which lines of business use which IT infrastructure, and how efficiently or inefficiently does each LOB use it? Who are my critical users? Is it possible to assign a part of the technical resources just for them? How is the growth of my database? Which key LOBs and key users influence the growth of the database? What data is used most frequently? Is that data always available? The list goes on. By understanding the business requirement, we can tune the system accordingly.

Technical requirement:

How many CPUs? How many disks? Is an additional server node required? How balanced is the load? How fast is the network? Is table striping required? What is the hit ratio? What is the I/O contention? Should we reorganize? How efficient is the operating system? How is the performance of BEx? Here, too, the list goes on.
By gauging, analyzing and balancing the two lists of technical requirements and business requirements, we can end up with a good-performing SAP system.
2. What are the general settings we need to adapt in a good performing BW system?

Following are the main parameters we need to monitor and maintain for a BW system. To start with performance tuning in a BW system, we have to focus on the following parameters:
* rsdb/esm/buffersize_kb
* rsdb/esm/max_objects
* rtbb/max_tables
* rtbb/buffer_length
* rdisp/max_wprun_time
* gw/cpic_timeout
* gw/max_conn
* gw/max_overflow_size
* rdisp/max_comm_entries
* dbs/ora/array_buf_size
* icm/host_name_full
* icm/keep_alive_timeout
Depending on the size of the main memory, the program buffer should be between 200 and 400 MB. Unlike in R/3 systems, a higher number of program buffer swaps is less important in BW systems and is often unavoidable, since the information stored in the program buffer is significantly less likely to be reused. While the response times of R/3 transactions are only around several hundred milliseconds, the response times of BW queries take seconds. Therefore, tuning the program buffer can only improve performance by milliseconds.

Therefore, if the available main memory is limited, you should increase the size of the extended memory. However, the program buffer should not be set lower than 200 MB. If the available main memory is sufficient, the program buffer in BW 2.X/3.X systems should be set to at least 300 MB.

BW users require significantly more extended memory than R/3 users. The size of the extended memory is related to the available main memory but should not be lower than 512 MB.

Set the maximum work process runtime parameter to a high value and also set the timeout sessions to be high. Set the parameter dbs/ora/array_buf_size to a sufficiently large size to keep the number of array inserts, for example during data uploads or roll-up, as low as possible. This improves performance during insert operations.
The main performance-related tables in the BW environment are:

* F-Fact tables: /BI0/F
* E-Fact tables: /BI0/E
* Dimension tables: /BI0/D
* SID tables: /BI0/S
* SID tables (navigation attribute, time-independent): /BI0/X
* SID tables (navigation attribute, time-dependent): /BI0/Y
In addition to the /BI0 tables delivered by SAP, you also have customer-specific /BIC tables with an otherwise identical naming convention.
Since objects and partitions are frequently created and deleted in BW, and extents are thus allocated and reallocated, you should use Locally Managed Table spaces (LMTS) in the BW environment wherever possible.
Since numerous hash, bitmap and sort operations are carried out in the BW environment in particular, you must pay close attention to the configuration of the PGA and of the PSAPTEMP tablespace. These components are crucial factors in the performance of the operations described. You must therefore ensure that PGA_AGGREGATE_TARGET is set to a reasonable size and that PSAPTEMP is located in a high-speed disk area. It may be useful to assign up to 40% of the memory available for Oracle to the PGA.
If you work with large hierarchies, you have to increase the size of this buffer considerably. You should be able to store at least 5,000 objects in the buffer.
The BW basis parameters must be set optimally for the BW system to work without errors and the system to perform efficiently. The recommendations for BW systems are not always the same as those for R/3 systems.
3. What are the factors which influence the performance on a BW system?

There are three major factors that influence the performance of a BW system.

* How we administer the BW system
* Technical resources available
* How the entire BW landscape is designed

BW ADMINISTRATION

The first step to resolve most problems in a BW system is archiving. Archive as much data as you can: archive data from InfoCubes and ODS objects and delete the archived data from the BW database. This reduces the data volume and thus improves upload and query performance.
An archiving plan can also affect the data model; for a yearly archiving cycle, for example, MultiProvider partitioning per year fits naturally.

The archiving process in the BW system works slightly differently to that in an R/3 environment. In an R/3 system, the data is written to an archive file. Afterwards, this file is read and the data is deleted from the database, driven by the content of the file. In a BW system, the data from the archive file is not used in the deletion process (it is only verified to be accessible and complete). The values of the selection characteristics that were used for retrieving data in the 'Write' job are passed to the selective deletion of the data target. This is the same functionality that is available within data target management in the Administrator Workbench ('Contents' tab strip). This functionality tries to apply an optimal deletion strategy depending on the values selected; that is, it drops a partition when possible, or copies and renames the data target when more than a certain percentage of the data has to be deleted.
Reloading archived data should be an exception rather than the general case, since data should be archived only if it is not needed in the database anymore. When the archived data target is serving also as a data mart to populate other data targets, we recommend that you load the data to a copy of the original (archived) data target, and combine the two resulting data targets with a MultiProvider.
In order to reload the data to a data target, you have to use the export DataSource of the archived data target. You then trigger the upload either by using 'Update ODS data in data target' or by replicating the DataSources of the MYSELF source system and subsequently scheduling an InfoPackage for the respective InfoSource. In this scenario we have used the first option.
Load balancing:
Load balancing provides the capability to distribute processing across several servers in order to optimally utilize the server resources that are available. An effective load balancing strategy can help you to avoid inefficient situations where one server is overloaded (and thus performance suffers on that server), while other servers go underutilized. The following processes can be balanced:
* Logon load balancing (via group login): this allows you to distribute the workload of multiple query/administration users across several application servers.
* Distribution of web users across application servers can be configured in the BEx service in SICF.
And also, Process chains, Data loads and data extractions should be routed to perform in specific target servers.
In some cases, it is useful to restrict the extraction or data load to a specific server (in SBIW in an SAP source system, or SPRO in BW), i.e. not using load balancing. This can be used for special cases where a certain server has fast CPUs and therefore you may want to designate it as an extraction or data load server.
Reorganize the table:
Logs of several processes are collected in the application log tables. These tables tend to grow very big as they are not automatically deleted by the system and can impact the overall system performance.
Table EDI40 can also grow very big depending on the number of IDOC records.
Depending on the growth rate (i.e., number of processes running in the system), either schedule the reorganization process (transaction SLG2) regularly or delete log data as soon as you notice significant DB time spent in table BALDAT (e.g., in SQL trace).


Regularly delete old RSDDSTAT entries. If several traces and logs run in the background, this can lead to bad overall performance, and sometimes it is difficult to discover all active logs. Be sure to switch off traces and logs as soon as they are no longer needed.
Technical resources available:
The capacity of the hardware resources is a highly significant aspect of the overall performance of the BW system. Insufficient resources in any one area can constrain performance.
These include:
* Number of CPUs
* Speed of CPUs
* Memory
* I/O controller
* Disk architecture (e.g. RAID)
A BW environment can contain a DB server and several application servers. These servers can be configured individually (e.g. number of dialog and batch processes), so that the execution of the different job types (such as queries, loading, DB processes) can be optimized. The general guideline here is to avoid hot spots and bottlenecks.
For optimizing the hardware resources, it is recommended to define at least two operation modes: one for batch processing (if there is a dedicated batch window) with several batch processes and one for the query processing with several dialog processes.
Different application servers have separate buffers and caches. E.g. the OLAP cache (BW 3.x) on one application server does not use the OLAP cache on other servers.
BW landscape design:
Info Cube modeling is the process by which business reporting requirements are structured into an object with the facts and characteristics that will meet the reporting needs.
Characteristics are structured together in related branches called dimensions.
The key figures form the facts.
The configuration of dimension tables in relation to the fact table is denoted as "star schema".
For a BW system to perform well, we should not combine dynamic characteristics in the same dimension, in order to keep dimensions rather small. Example: do not combine customer and material in one dimension if the two characteristics are completely independent. As a general rule, it makes more sense to have many smaller dimensions than fewer larger dimensions. Dimension tables should be sized at less than 10% of the fact table.
Use MultiProvider (or logical) partitioning to reduce the sizes of the Info Cubes.
Example: Define Info Cubes for one year and join them via a MultiProvider so we can have parallel access to underlying basis Info Cubes, load balancing, and resource utilization.
Define large dimensions as line item dimensions (e.g. document number or customer number) if, as a rule of thumb, the dimension table size exceeds 10% of the fact table(s) size. A B-tree index is generally preferable for cases of high cardinality (a high number of distinct values).
Info Cubes containing non-cumulative key figures should not be too granular. A high granularity will result in a huge amount of reference points which will impact aggregate build significantly. Reference points can only be deleted by deleting an object key not specifying the time period, i.e. all available records for this key are deleted.
The data model has tremendous impact on both query AND load performance. E.g. bad dimension model. Example: Customer and material in one dimension instead of separate dimensions can lead to huge dimension tables and thus slows down query performance, as it is expensive to join a huge dimension table to a huge fact table. Transaction RSRV can be used to check the fact to dimension table ratio.
As non-cumulative key figures are well defined for every possible point in time (according to the calculation algorithm), it could make sense to restrict the validity to a certain time period. Example: If a plant is closed, it should not show up any stock figures. These objects can be defined as validity objects. Note that for every entry in the validity table, a separate query is generated at query runtime.
4. What are the factors to consider while extracting data from source system?

Data load performance can be affected by the following key aspects:
* Customer exits -> check with RSA3, SE30 and ST05
* Resource utilization -> SM50 / SM51
* Load balancing -> SM50 / SM51 (configure ROIDOCPRMS)
* Data package size
* Indices on tables -> ST05
* Flat file format
* Content vs. generic extractor

The size of the packages depends on the application and on the contents and structure of the documents. During data extraction, a dataset is collected in an array (internal table) in memory. The package size setting determines how large this internal table grows before a data package is sent. Thus, it also defines the number of commits on DB level.
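As a purely illustrative calculation of how the package size setting translates into commits (the numbers are assumptions, not recommendations): with a package size of 50,000 records, an extraction of 1,000,000 records is split into 20 data packages and therefore causes roughly 20 commits on the database; halving the package size doubles the number of packages and commits.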
Use RSMO and RSA3 to monitor the load.

Indices can be built on Data Source tables to speed up the selection process.
If there is poor performance in the data load, refer to the following note:
Note 417307 - Extractor package size: Collective note for applications.
If you define selection criteria in your Info Package and the selection of the data is very slow, consider building indices on the Data Source tables in the source system.
5. What are the factors to consider while loading the data?

There are two major aspects to consider while loading data:
* I/O contention
* O/S monitors

I/O contention:
* High number of DB writes during large data loads.
* Disk layout and striping (what is located on the same disk or tablespace/DB space, etc.?).

At the time of data load we also need to check the transformation rules -> use SE30 and ST05. The master data load creates all SIDs and populates the master data tables (attributes and/or texts). If the SIDs do not exist when transaction data is loaded, these tables have to be populated during the transaction data load, which slows down the overall process.

Another major optimization that can be performed for data loads is buffering number ranges. The SID number range can be buffered instead of accessing the DB for each SID.
If you encounter massive accesses to DB table NRIV via an SQL trace (ST05), increase the number range buffer in transaction SNRO.

Always load master data before transaction data. The transaction data load will be improved, as all master data SIDs are created prior to the transaction data load, thus precluding the system from creating the SIDs at the time of load.
In transaction RSCUSTV6 the size of each PSA partition can be defined. This size defines the number of records that must be exceeded to create a new PSA partition. One request is contained in one partition, even if its size exceeds the user-defined PSA size; several packages can be stored within one partition.
The PSA is partitioned to enable fast deletion (DDL statement DROP PARTITION). Packages are not deleted physically until all packages in the same partition can be deleted.

Transformation rules are transfer rules and update rules. Start routines enable you to manipulate whole data packages (database array operations) instead of changing data record by record, as in the sketch below. In general, it is preferable to apply transformations as early as possible in order to reuse the data for several targets.
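A minimal, hedged sketch of what such a start routine body can look like in the BW 3.x style, where the package arrives in the internal table DATA_PACKAGE; the field /BIC/ZSTATUS and the filter value 'X' are hypothetical and only illustrate the idea:

* Drop unwanted records for the whole package in one array operation
* instead of filtering record by record in the individual rules.
* /BIC/ZSTATUS is a hypothetical example field of the transfer structure.
DELETE DATA_PACKAGE WHERE /BIC/ZSTATUS = 'X'.

* A start routine can also derive values across the whole package:
FIELD-SYMBOLS: <FS_REC> LIKE LINE OF DATA_PACKAGE.
LOOP AT DATA_PACKAGE ASSIGNING <FS_REC>.
* ... fill or default fields of <FS_REC> here ...
ENDLOOP.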

Flat files: flat files can be uploaded either in CSV format or in fixed-length ASCII format. If you choose CSV format, the records are internally converted to fixed-length format, which generates overhead.
You can upload files either from the client or from the application server. Uploading files from the client workstation implies sending the file to the application server via the network; the speed of the server backbone determines the performance impact, and gigabit backplanes make it negligible.
The size (i.e., number of records) of the packages and the frequency of status IDocs can be defined in table RSADMINC (transaction RSCUSTV6) for the flat file upload. If you load a large amount of flat file data, it is preferable to use fixed-length ASCII format, to store the files on the application server rather than on the client, and to set the parameters according to the recommendations in the referenced note.

If possible, split the files to achieve a parallel upload. We recommend using as many equally sized files as there are CPUs available.
6 / 7. How to tune your queries and aggregates?
The data in a Data Warehouse is largely very detailed. In SAP BW, the Info Cube is the primary unit of storage for data for reporting purposes. The results obtained by executing a report or query represent a summarized dataset.
An aggregate is a materialized, summarized view of the data in an Info Cube. It stores a subset of Info Cube data in a redundant form. When an appropriate aggregate for a query exists, summarized data can be read directly from the database during query execution, instead of having to perform this summarization during runtime. Aggregates reduce the volume of data to be read from the database, speed up query execution time, and reduce the overall load on the database.
A sound data model in BW should comprise the following:
Dimensional modeling.
Logical partitioning.
Physical partitioning.
The main purpose of an aggregate is to accelerate the response time of queries by reducing the amount of data that must be read from the database for a navigation step. Grouping and filtering enhance the value of an aggregate.
We can group according to the characteristic or attribute value, according to the nodes of the hierarchy level, and also filter according to a fixed value.
It is guaranteed that queries always deliver consistent data when you drilldown. This means that data provided when querying against an aggregate is always from the same set of data that is visible within an Info Cube.
Rollup
New data packets/requests that are loaded into the InfoCube cannot be used for reporting at first if there are aggregates that are already filled. The new packets must first be written to the aggregates by a so-called "roll-up". Data that has been recently loaded into an InfoCube is not visible for reporting, from the InfoCube or its aggregates, until an aggregate roll-up takes place. During this process you can continue to report using the data that existed prior to the recent data load. The new data is only displayed by queries that are executed after a successful roll-up.
The split of a query is rule-based:
Parts of the query on different aggregation levels are split.
Parts with different selections on a characteristic are combined.
Parts on different hierarchy levels, or parts using different hierarchies, are split.
After the split, the OLAP processor searches for an optimal aggregate for each part. Parts which use the same aggregate are combined again (in some cases it is not possible to combine them).
Maintaining an aggregate: RSDDV.

After selecting a particular InfoCube, we can drill down to the options of its aggregates and tune each of them.


This is the same screen for BI Accelerator index.
RSDDBIAMON:
This is another important transaction code, where we can perform the following actions:
* Restart host: restarts the BI Accelerator hardware.
* Restart BIA server: restarts all the BI Accelerator servers and services, including the name server and index server.
* Restart BIA index server: restarts the index server (the name servers are not restarted).
* Rebuild BIA indexes: if a check discovers inconsistencies in the indexes, you can use this action to delete and rebuild all the BI Accelerator indexes.
* Reorganize BIA landscape: if the BI Accelerator server landscape is unevenly distributed, this action redistributes the loaded indexes on the BI Accelerator servers.
Checks:
* Connection check
* Index check


In our system the BIA monitor is not set up, so it needs to be set up. I am not going to set it up here, because it might affect a few other RFC destinations.
Query design:
* Multi-dimensional query
* Inclusion / exclusion
* MultiProvider query
* Cell calculation
* Customer exits
* Query read mode
Every Query should start with a relatively small result set; let the user drill down to more detailed information.
Do not use ODS objects for multi-dimensional reporting.
Queries on MultiProviders usually access all underlying InfoProviders, even if some of them cannot return any data because none of the key figures in the query definition are contained in that InfoProvider.
In Oracle, fact tables can be indexed either by bitmap indices or by B-tree indices. A bitmap index stores a bitmap stream for every characteristic value. Bitmap indices are suitable for characteristics with few distinct values, and binary operations (AND or OR) on them are very fast.
B-tree indices are stored in a (balanced) tree structure. If the system searches for one entry, it starts at the root and follows a path down to the leaf where the row ID is linked. B-tree indices are suitable for characteristics with many distinct values.
In some cases, Oracle indices can degenerate. Degeneration is similar to fragmentation and reduces the efficiency of the indexes. This happens when records are frequently added and deleted.
The OLAP Cache can help with most query performance issues. For frequently used queries, the first access fills the OLAP Cache and all subsequent calls will hit the OLAP Cache and do not have to read the database tables. In addition to this pure caching functionality, the Cache can also be used to optimize specific queries and drill-down paths by 'warming up' the Cache; with this you fill the Cache in batch to improve all accesses to this query data substantially.
8. What are the different options in Oracle for a good performing BW system?
I/O hotspots:
The purpose of disk layout is to avoid I/O hot spots by distributing the data accesses across several physical disks. The goal is to optimize the overall throughput to the disks.

The basic rule is: stripe over everything, including RAID-subsystems.

Managing tablespaces:
Locally Managed Tablespaces manage their own extents by maintaining bitmaps in each data file. The bitmaps correspond to (groups of) blocks.
Make sure that all (bigger) tablespaces are locally managed. Extent and partition maintenance is drastically improved, as DB dictionary accesses are minimized. Administration effort is also reduced.
Parallel query option:
Oracle can read database table contents in parallel if this setting is active. BW uses this feature especially for staging processes and aggregate builds. The Parallel Query Option is used by default. Make sure that the init.ora entries for PARALLEL_MAX_SERVERS are set according to the recommendations in the relevant Oracle note.
Table partitioning:
Table partitions are physically separated tables, but logically they are linked to one table name. PSA tables and non-compressed F-fact table are partitioned by the system (by request ID). The (compressed) E-fact table can be partitioned by the user by certain time characteristics. For range-partitioned InfoCubes, the SID of the chosen time characteristic is added to both fact tables.
When using range partitioning, query response time is generally improved by partition pruning on the E fact table: all irrelevant partitions are discarded and the data volume to be read is reduced by the time restriction of the query.
In ORACLE, report SAP_DROP_EMPTY_FPARTITIONS can help you to remove unused or empty partitions of InfoCube or aggregate fact tables. Unused or empty partitions can emerge in case of selective deletion or aborted compression and may affect query performance as all F fact table partitions are accessed for queries on the InfoCube.
9. What are the different tools available to tune a BW system?

RSMO is used to monitor the data flow from the source system to the target system. We can see data by request, source system, time, request ID, etc. It provides all necessary information on the time spent in the different processes during the load (e.g., extraction, transfer, posting to PSA, processing transformation rules, writing to fact tables). In the upload monitor you are also able to debug transfer and update rules.
If the extraction from an SAP source system consumes significant time, use the extractor checker (transaction RSA3) in the source system.
If the data transfer times are too high, check whether too many work processes are busy (if so, avoid large data loads with the "Update Data Targets in Parallel" method), and check for swapping on the application servers (set rdisp/bufrefmode = "sendoff,exeauto" during the load phase if you use several application servers).
RSRT

The Query Monitor (transaction RSRT) allows you to execute queries and to trace queries in a debug mode with several parameters (e.g., do not use aggregates, do not use buffer, show SQL statement).
In the debug mode, you can investigate if the correct aggregate(s) are used and which statistics the query execution generates. For checking reasons, you can switch off the usage of aggregates, switch to no parallel processing (see for more details in the MultiProvider section) or display the SQL statement and the run schedule.

Select a particular query and then click on performance info.

In this way, we can generate detailed performance information for every query.

Query tracing:
RSRTRACE: The Query Trace Tool (transaction RSRTRACE) allows you to record some important function module calls and to process and debug them at a later stage. Transaction RSRCATTTRACE takes the log of RSRTRACE as input and suggests aggregates for the first execution and for all further navigations performed.

RSRV: BW objects can be checked for consistency in transaction RSRV, and inconsistent objects can be repaired.

Apart from these BW tools, we have standard ABAP based tools like ST05, ST03n, SE30, SM50 and SM51 to check and measure the performance of the system.
In SE30, we have special options such as IF cases, field conversions, and monitoring of the SQL interface.

ST05: The SQL trace (transaction ST05) records all activities on the database and enables you to check long runtimes on a DB table or several similar accesses to the same data.
If we find problems with an isolated process (upload or query) and we have already analyzed, for example, the existence of aggregates, we can refine our analysis by using the SQL trace. Filter on a specific user (e.g. the query user or the extraction user ALEREMOTE) and make sure that no concurrent jobs run at the same time as this execution. We will then find out which tables are accessed, what time is consumed, and whether some tables are accessed redundantly.

Another important tool to be used is ST10.
Here we can find out the statistics of a table and get more detailed information on it. If we assume a general buffer problem, check ST10 and the buffer settings of all tables; compare buffer usage vs. invalidations.
ST04, DB02, SM50, SM51, ST02 and ST06 are some of the important tools which we normally use in R/3. These transaction codes should be used extensively here as well for gauging and optimizing the performance of the system.
10. What are the best practices we can follow in a BW system?

Best practices for a production BW system can be drafted only in close interaction with the functional and technical teams, taking the nature of the production system into account.

Here are a couple of best practices we could implement to improve performance.
Activate transfer rules for the InfoSource:
When you have maintained the transfer structure and the communication structure, you can use the transfer rules to determine how the transfer structure fields are to be assigned to the communication structure InfoObjects. You can arrange for a 1:1 assignment, but you can also fill InfoObjects using routines or constants.
Use scheduler:
The scheduler is the connecting link between the source systems and the SAP Business Information Warehouse. Using the scheduler you can determine when and from which InfoSource, DataSource, and source system, data (transaction data, master data, texts or hierarchies) is requested and updated.
The principle behind the scheduler relates to the functions of SAP background jobs. The data request can either be scheduled straight away, or it can be scheduled with a background job and started automatically at a later point in time. We get to the data request via the scheduler in the Administrator Workbench (Modeling), by choosing InfoSource Tree -> Your Application Component -> InfoSources -> Source System -> Context Menu -> Create InfoPackage.

Assign several DataSources to one InfoSource: assign several DataSources to one InfoSource if you want to gather data from different sources into a single InfoSource. This is used, for example, if data from different IBUs that logically belongs together is grouped together in BW.
The fields for a DataSource are assigned to InfoObjects in BW. This assignment takes place in the same way in the transfer rules maintenance.