Wednesday, December 23, 2009

Number of records inserted in infocube

Table: RSMONICDP

Use this table to get the records inserted or updated in an InfoCube per request.

More tables of this kind:

RSSELDONE
RSSELDTP

RSSELMON

RSREQDONE
RSICCONT

To check BW objects to be repaired in SAP BI

Go to transaction SE38.
Enter the program: RSAR_RSISOSMAP_REPAIR

Click on the execute button or press F8.
Here you will see the checkbox "Repair mode"; select it and click on execute.

Monday, December 21, 2009

How to Remove Leading Zeros in Transformations

This can be done in many ways...
1) This can be handled very easily at the InfoObject level by selecting the 'ALPHA' conversion routine.

2) You can also tick the "Conversion" option in the transfer rules. This performs the same function as the ALPHA conversion.

3) Some times you need to write an ABAP routine to remove leading zeros in transformations.

Here is sample code to remove leading zeros for the 0ALLOC_NMBR field:

* Convert the internal (zero-padded) value to its external format.
DATA: V_OUTPUT LIKE TRAN_STRUCTURE-ALLOC_NMBR.

CALL FUNCTION 'CONVERSION_EXIT_ALPHA_OUTPUT'
  EXPORTING
    INPUT  = TRAN_STRUCTURE-ALLOC_NMBR
  IMPORTING
    OUTPUT = V_OUTPUT.

RESULT = V_OUTPUT.


4) Another example ABAP routine. Note that the hard-coded offsets assume a fixed field length: the first four characters are checked and, if they are zeros, only the following six are kept.

IF TRAN_STRUCTURE-ALLOC_NMBR+0(4) = '0000'.
* Drop the four leading zeros.
  RESULT = TRAN_STRUCTURE-ALLOC_NMBR+4(6).
ELSE.
  RESULT = TRAN_STRUCTURE-ALLOC_NMBR.
ENDIF.
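
A generic alternative, sketched here under the assumption that the field is character-like, is to let ABAP strip the zeros itself instead of hard-coding offsets:

* Alternative: remove all leading zeros without fixed offsets.
DATA: V_RESULT LIKE TRAN_STRUCTURE-ALLOC_NMBR.

V_RESULT = TRAN_STRUCTURE-ALLOC_NMBR.
SHIFT V_RESULT LEFT DELETING LEADING '0'.
RESULT = V_RESULT.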

Saturday, December 19, 2009

BW Questions-V01

1. What is the difference between OLTP and OLAP?
OLTP: current data; short database transactions; online insert/update/delete; normalization is promoted; high transaction volumes; transaction recovery is necessary.

OLAP: current and historical data; long database transactions; batch insert/update/delete; denormalization is promoted; low transaction volumes; transaction recovery is not necessary.

OLTP stands for OnLine Transaction Processing: it contains normalized tables and live data with frequent inserts, updates and deletes.
OLAP (OnLine Analytical Processing) contains the history of the OLTP data; it is non-volatile, acts as a decision support system, and is used for creating forecasting reports.



2. What are the different types of multidimensional models?

3. What is a dimension?

A grouping of those evaluation groups (characteristics) that belong together under a common superordinate term.
With the definition of an InfoCube, characteristics are grouped together into dimensions in order to store them in a star schema table (dimension table).


4. What are facts and the fact table?
Table in the center of an InfoCube star schema.
The data part contains all key figures of the InfoCube and the key is formed by links to the entries of the dimensions of the InfoCube.


5. What is the difference between key performance indicators (KPIs) and key figures?

Key Performance Indicators are quantifiable measurements, agreed to beforehand, that reflect the critical success factors of an organization. They differ depending on the organization. A business may have as one of its Key Performance Indicators the percentage of its income that comes from return customers. A school may focus its Key Performance Indicators on the graduation rates of its students. A customer service department may have as one of its Key Performance Indicators, in line with overall company KPIs, the percentage of customer calls answered in the first minute. A Key Performance Indicator for a social service organization might be the number of clients assisted during the year.
Whatever Key Performance Indicators are selected, they must reflect the organization's goals, they must be key to its success, and they must be quantifiable (measurable). Key Performance Indicators are usually long-term considerations; the definition of what they are and how they are measured does not change often. The target for a particular Key Performance Indicator may change as the organization's goals change, or as it gets closer to achieving a goal.
6. What is the difference between the star schema and the extended star schema?

The differences between the star and extended star schemas:
1) Reusability of master data: in the star schema, master data is not reusable because it sits inside the cube, i.e. the dimension tables and the master data tables are the same and both belong to the cube. In the extended star schema, the master data tables are outside the cube, so they are reusable components; master data tables and dimension tables are separate.
2) Limited analysis: in the star schema the maximum number of master data tables is 16. In the extended star schema the maximum number of dimension tables is 16, and up to 233 characteristics can be assigned to one dimension table, i.e. up to 233 * 16 characteristics in total.
3) Performance: the star schema stores alphanumeric data in the dimensions. The extended star schema uses numeric SIDs to link to the dimension tables, so its performance is better than that of the star schema.

7. What is a dimension table in the extended star schema, and when exactly is it created and populated?

8. What is a SID (surrogate ID) table in the extended star schema, and when exactly is it created and populated?

9. What are the limitations of InfoCube modeling in BW?

10. What are flexible update and direct update, and what is the difference?
When we load data into a data target at the InfoProvider level, we use flexible update (whether it is master data or transaction data). When we load data into a data target at the InfoObject level, we use direct update.

The main difference: direct update = without update rules; flexible update = with update rules.
Scenarios for Flexible Updating
1. Attributes and texts are delivered together in a file:
Your master data, attributes, and texts are available together in a flat file. They are updated by an InfoSource with flexible updating in additional InfoObjects. In doing so, texts and attributes can be separated from each other in the communication structure.
Flexible updating is not necessary if:
· texts and attributes are available in separate files/DataSources. In this case, you can choose direct updating if additional transformations using update rules are not necessary.
2. Attributes and texts come from several DataSources:
This scenario is similar to the one described above, only slightly more complex. Your master data comes from two different source systems and delivers attributes and texts in flat files. They are grouped together in an InfoSource with flexible updating. Attributes and texts can be separated in the communication structure and are updated further in InfoObjects. The texts or attributes from both source systems are located in these InfoObjects.
3. Master data in the ODS layer:
A master data InfoSource is updated to a master data ODS object business partner with flexible updating. The data can now be cleaned and consolidated in the ODS object before being re-read. This is important when the master data frequently changes.
These cleaned objects can now be updated to further ODS Objects. The data can also be selectively updated using routines in the update rules. This enables you to get views of selected areas. The data for the business partner is divided into customer and vendor here.
Instead you can update the data from the ODS object in InfoObjects as well (with attributes or texts). When doing this, be aware that loading of deltas takes place serially. You can ensure this when you activate the automatic updates in ODS object maintenance or when you perform the loading process using a process chain (see also Including ODS Objects in a Process Chain).
A master data ODS object generally makes the following options available:
· It displays an additional level on which master data from the whole enterprise can be consolidated.
· ODS objects can be used as validation tables for checking the referential integrity of characteristic values in the update rules.
· It can serve as a central repository for master data, in which master data is consolidated from various systems. They can then be forwarded to further BW systems using the Data Mart.


Direct update is generally used for master data InfoObjects and hierarchies. No update rules are used here; the data from the source system passes through the transfer structure, transfer rules and communication structure directly into the data target, i.e. the InfoObject.

11. What are transfer rules and update rules, and what is the difference?

Why do we use update rules while loading data from the source system? Why can we not load data directly from the transfer rules into the data target? Update rules sit after the InfoSource and before the data target. Transaction data cannot be loaded into a data target without passing through the update rules; for master data, update rules are not required. An example: say you have customer, quantity, price, revenue and date, and you can extract customer, quantity, price and date from the source system but not revenue. In the transfer rules you can apply a rule to quantity and price so that revenue is derived. Suppose the requirement is that the period should also appear in the report: then in the update rules, by setting the date as the time reference characteristic, the system derives values such as period, week and month. Depending on the requirements, you use the update rules in this way; the rules are applied to fill the data target, and according to them the data lands in the respective object locations.
The reasons for having update rules are:
1. If a piece of business logic (let's say: if a certain quantity > '5', then the rating is 'A') needs to be implemented, you would have to do it in all the transfer rules, whereas in an update rule only once.
2. You can use return tables in update rules, which split an incoming data package record into multiple records. This is not possible in transfer rules.
3. Currency conversion is not possible in transfer rules.
4. If you have a key figure that is calculated from the base key figures, you do the calculation only in the update rules.
WHAT ARE THE DIFFERENT TYPES OF TRANSFER RULES
Four types:
1) InfoObject: direct mapping
2) Constant: a fixed value
3) Formula: the value is determined using a formula
4) Routine: ABAP code (see the sketch below)
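
To make the routine option concrete, here is a minimal sketch of the derivation described above (revenue from quantity and price). Only the routine body is shown: BW generates the surrounding FORM routine with additional parameters, and the field names QTY and PRICE are hypothetical.

* Body of a transfer-rule routine; TRAN_STRUCTURE and RESULT are the
* standard variables of the generated routine, QTY and PRICE are
* hypothetical fields of the transfer structure.
  RESULT = TRAN_STRUCTURE-QTY * TRAN_STRUCTURE-PRICE.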

12. What are the update mode, update method and update type for updating data into an InfoCube?

13. What is the PSA, and what are the advantages and disadvantages of the PSA?

14. What are the fields of the PSA?

15. How many DataSources can be assigned to an InfoSource?

16. What are the different transformation methods in transfer rules?

17. What is an ER diagram?

The Entity-Relationship (ER) model was originally proposed by Peter Chen in 1976 [Chen76] as a way to unify the network and relational database views. Simply stated, the ER model is a conceptual data model that views the real world as entities and relationships. A basic component of the model is the Entity-Relationship diagram, which is used to visually represent data objects. Since Chen wrote his paper, the model has been extended, and today it is commonly used for database design. For the database designer, the utility of the ER model is threefold: it maps well to the relational model, since the constructs used in the ER model can easily be transformed into relational tables; it is simple and easy to understand with a minimum of training, so the designer can use it to communicate the design to the end user; and it can serve as a design plan for the database developer when implementing a data model in specific database management software.

Why does an InfoCube have a maximum of 16 dimensions?
The fact table's key is limited to 16 foreign-key fields, so an InfoCube can have at most 16 dimension tables. Of these, 3 are SAP defaults (unit, time, data packet), which leaves 13 user-definable dimensions.

Monday, December 14, 2009

Interview Questions:

1. Identify the statement(s) that is/are true. A change run...

a. Activates the new Master data and Hierarchy data
b. Aggregates are realigned and recalculated
c. Always reads data from the InfoCube to realign aggregates
d. Aggregates are not affected by change run

1: A, B

2. Which statement(s) is/are true about Multiproviders?

a. This is a virtual Infoprovider that does not store data
b. They can contain InfoCubes, ODSs, info objects and info sets
c. More than one info provider is required to build a Multiprovider
d. It is similar to joining the data tables

2: A, B

3. The structure of the PSA table created for an info source will be...

a. Featuring the exact same structure as Transfer structure
b. Similar to the transfer rules
c. Similarly structured as the Communication structure
d. The same as Transfer structure, plus four more fields in the beginning

3: D

4. In BW, special characters are not permitted unless it has been defined using this transaction:

a. rrmx
b. rskc
c. rsa15
d. rrbs

4: B

5. Select the true statement(s) about info sources:

a. One info source can have more than one source system assigned to it
b. One info source can have more than one data source assigned to it provided the data sources are in different source systems
c. Communication structure is a part of an info source
d. None of the above

5: A, C

6. Select the statement(s) that is/are true about the data sources in a BW system:

a. If the hide field indicator is set in a data source, this field will not be transferred to BW even after replicating the data source
b. A field in a data source won't be usable unless the selection field indicator has been set in the data source
c. A field in an info package will not be visible for filtering unless the selection field has been checked in the data source
d. All of the above

6: A, C

7. Select the statement(s) which is/are true about the 'Control parameters for data transfer from the Source System':

a. The table used to store the control parameters is ROIDOCPRMS
b. Field max lines is the maximum number of records in a packet
c. Max Size is the maximum number of records that can be transferred to BW
d. All of the above

7: A

8. The indicator 'Do not condense requests into one request when activation takes place' during ODS activation applies to condensation of multiple requests into one request to store it in the active table of the ODS.

a. True
b. False

8: B

9. Select the statement(s) which is/are not true related to flat file uploads:

a. CSV and ASCII files can be uploaded
b. The table used to store the flat file load parameters is RSADMINC
c. The transaction for setting parameters for flat file upload is RSCUSTV7
d. None of the above

9: C

10. Which statement(s) is/are true related to Navigational attributes vs Dimensional attributes?

a. Dimensional attributes have a performance advantage over Navigational attributes for queries
b. Change history will be available if an attribute is defined as navigational
c. History of changes is available if an attribute is included as a characteristic in the cube
d. All of the above

10: A, C

11. When a dimension is created as a line item dimension in a cube, the dimension IDs will be the same as the SIDs.

a. True
b. False

11: A

12. Select the true statement(s) related to the start routine in the update rules:

a. All records in the data packet can be accessed
b. Variables declared in the global area is available for individual routines
c. A Returncode greater than 0 will abort the whole packet
d. None of the above

12: A, B, C

13. If characteristic values have been entered in the InfoCube-specific properties of an InfoCube, only these values can be loaded into the cube for that characteristic.

a. True
b. False

13: A

14. After any changes have been made to an InfoSet, it needs to be adjusted using transaction RSISET.

a. True
b. False

14: A

15. Select the true statement(s) about read modes in BW:

a. Read mode determines how the OLAP processor retrieves data during query execution and navigation
b. Three different types of read modes are available
c. Can be set only at individual query level
d. None of the above

15: A, B

BW Interview Questions

1) Please describe your experience with BEx (Business Explorer)
A) Rate your level of experience with BEx and the rationale for your self-rating

B) How many queries have you developed? :

C) How many reports have you written?

D) How many workbooks have you developed?

E) Experience with jump targets (OLTP, use jump target)

F) Describe experience with BW-compatible ETL tools (e.g. Ascential)

2) Describe your experience with 3rd party report tools (Crystal Decisions, Business Objects a plus)

3) Describe your experience with the design and implementation of standard & custom InfoCubes.

1. How many InfoCubes have you implemented from start to end by yourself (not with a team)?

2. Of these Cubes, how many characteristics (including attributes) did the largest one have?

3. How much customization was done on the InfoCubes you have implemented?

4) Describe your experience with requirements definition/gathering.

5) What experience have you had creating Functional and Technical specifications?

6) Describe any testing experience you have:

7) Describe your experience with BW extractors

1. How many standard BW extractors have you implemented?

2. How many custom BW extractors have you implemented?

8) Describe how you have used Excel as a complement to BEx

A) Describe your level of expertise and the rationale for your self-rating (experience with macros, pivot tables and formatting)

9) Describe experience with ABAP

10) Describe any hands on experience with ASAP Methodology.

11) Identify SAP functional areas (SEM, CRM, etc.) you have experience in. Describe that experience.

12) What is partitioning and what are the benefits of partitioning in an InfoCube?

A) Partitioning is the method of dividing a table (either column-wise or row-wise) based on chosen fields, so that the intended values can be located quickly. By partitioning an InfoCube, reporting performance is enhanced because it is easier to search smaller tables; table maintenance also becomes easier.

13) What does Rollup do?

A) Rollup writes newly loaded requests into the existing aggregates of an InfoCube, so that the aggregates stay current with the cube.

14) What are the inputs for an infoset?

A) The inputs for an infoset are ODS objects and InfoObjects (with master data or text).

15) What internally happens when BW objects like Info Object, Info Cube or ODS are created and activated?

A) When an InfoObject, InfoCube or ODS object is created, BW maintains a saved version of that object but does not make it available for use. Once the object is activated, BW creates an active version that is available for use.

16) What is the maximum number of key fields that you can have in an ODS object?

A) 16.

17) What is the specific advantage of LO extraction over LIS extraction?

A) The load performance of LO extraction is better than that of LIS. In LIS, two tables are used for delta management, which is cumbersome; in LO, only one delta queue is used for delta management.

18) What is the importance of 0REQUID?

A) It is the InfoObject for the request ID. 0REQUID enables BW to distinguish between data records of different load requests.

19) Can you add programs in the scheduler?

A) Yes. Through event handling.

20) What is the importance of the table ROIDOCPRMS?

A) It is the IDoc parameters table in the source system. This table contains the details of the data transfer, such as the source system of the data, the data packet size, and the maximum number of lines in a data packet. The data packet size can be changed through the control parameters option in SBIW, i.e. the contents of this table can be changed.

21) What is the importance of 'start routine' in update rules?

A) A Start routine is a user exit that can be executed before the update rule starts to allow more complex computations for a key figure or a characteristic. The start routine has no return value. Its purpose is to execute preliminary calculations and to store them in a global data structure. You can access this structure or table in the other routines.
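
As an illustration, a minimal start-routine sketch in the BW 3.x style. DATA_PACKAGE is the standard name of the data package table there; the lookup table ZPRICES and its field MATNR are hypothetical:

* Global part: declared once, visible to the start routine and to
* all individual update routines of these update rules.
DATA: G_T_PRICES TYPE STANDARD TABLE OF ZPRICES.

* Start routine body: buffer all prices needed by this data package,
* so the individual routines can read G_T_PRICES instead of hitting
* the database once per record.
IF NOT DATA_PACKAGE[] IS INITIAL.
  SELECT * FROM ZPRICES INTO TABLE G_T_PRICES
    FOR ALL ENTRIES IN DATA_PACKAGE
    WHERE MATNR = DATA_PACKAGE-MATNR.
  SORT G_T_PRICES BY MATNR.
ENDIF.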
22) When is IDOC data transfer used?

A) IDocs are used for communication between logical systems (such as SAP R/3, R/2 and non-SAP systems) using ALE, and for communication between an SAP R/3 system and a non-SAP system. In BW, an IDoc is a data container for data exchange between SAP systems, or between SAP systems and external systems, based on an EDI interface. IDocs support a limited record size of 1000 bytes, so the IDoc transfer method is not used when loading data into the PSA, since that data is more detailed; it is used when the record size is less than 1000 bytes.

23) What is partitioning characteristic in CO-PA used for?

A) For easier parallel search and load of data.

24) What is the advantage of BW reporting on CO-PA data compared with directly running the queries on CO-PA?

A) BW has a better performance advantage over reporting in R/3. For a huge amount of data, the R/3 reporting tool is at a serious disadvantage because R/3 is modeled as an OLTP system and is good for transaction processing rather than analytical processing.

25) What is the function of BW statistics cube?

A) BW statistics cube contains the data related to the reporting performance and the data loads of all the InfoCubes in the BW system.

26) When an ODS is in 'overwrite' mode, does uploading the same data again and again create new entries in the change log each time data is uploaded?
A) No.

27) What is the function of 'selective deletion' tab in the manage->contents of an infocube?

A) It allows us to select a particular value of a particular field and delete its contents.

28) When we collapse an InfoCube, is the consolidated data stored in the same InfoCube or in a new one?

A) Data is stored in the same cube.

29) What is the effect of aggregation on the performance? Are there any negative effects on the performance?

A) Aggregation improves the performance in reporting.

30) What happens when you load transaction data without loading master data?

A) The transaction data gets loaded and the master data fields remain blank.

31) When given a choice between a single infocube and multiple InfoCubes with a multiprovider, what factors does one need to consider before making a decision?

A) One would have to see whether the InfoCubes are also used individually. If these cubes are often used individually, it is better to go for a MultiProvider over several cubes, since reporting on an individual cube is faster than querying one big cube with a lot of data.

32) How many hierarchy levels can be created for a characteristic info object?

A) Maximum of 98 levels.

33) What is open hub service?

A) The open hub service enables you to distribute data from an SAP BW system into external data marts, analytical applications, and other applications. With this, you can ensure controlled distribution using several systems. The central object for the export of data is the Infospoke. Using this, you can define the object from which the data comes and into which target it is transferred. Through the open hub service, SAP BW becomes a hub of an enterprise data warehouse. The distribution of data becomes clear through central monitoring from the distribution status in the BW system.

34) What is the function of 'reconstruction' tab in an infocube?

A) It reconstructs the deleted requests from the infocube. If a request has been deleted and later someone wants the data records of that request to be added to the infocube, one can use the reconstruction tab to add those records. It goes to the PSA and brings the data to the infocube.

35) What are secondary indexes with respect to InfoCubes?

A) Index created in addition to the primary index of the infocube. When you activate a table in the ABAP Dictionary, an index is created on the primary key fields of the table. Further indexes created for the table are called secondary indexes.

36) What is DB connect and where is it used?

A) DB Connect is a database connection facility: it connects an external database directly to BW so that its tables and views can be used as a source of data for BW.

37) Can we extract hierarchies from R/3 for CO-PA?

A) No, we cannot; there are no hierarchies in CO-PA.

38) Explain ‘field name for partitioning’ in CO-PA

A) The CO-PA partitioning characteristic is used to decrease the package size (e.g. partitioning by company code).

39) What is the V3 update method?

A) It is a method in the R/3 source system that schedules batch jobs to collectively move the data from the extract structures to the DataSource (delta queue).

40) Differences between serialized and non-serialized V3 updates

41) What is the common method of finding the tables used in any R/3 extraction?

A) By using the transaction LISTSCHEMA we can navigate the tables.

42) Differences between table view and infoset query

A) An InfoSet Query is a query using flat tables.

43) How do you load data from one InfoCube to another InfoCube?

A) Through the data mart interface, data can be loaded from one InfoCube to another InfoCube.

44) What is the significance of setup tables in LO extractions?
A) Setup tables hold the historical data for an LO extraction: a setup run fills them according to the selection criteria, and the init/full upload reads from them instead of from the application tables.

45) Difference between extract structure and datasource

A) In the DataSource we define the data coming from the source system, whereas the extract structure contains the layout of the DataSource's data, and it is where extraction and transfer rules can be defined.
B) The extract structure is a record layout of the fields to be extracted.
C) The extract structure is created in the source system; the DataSource is replicated to the SAP BW system.

46) What happens internally when the delta is initialized?

47) What is referential integrity mechanism ?

A) Referential integrity is the property that guarantees that the values in one column match the values in another, referenced column. This property is enforced through integrity constraints.
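
In BW update routines this kind of check is often implemented by hand; a minimal sketch, assuming the BW 3.x routine variables COMM_STRUCTURE and RETURNCODE, and a hypothetical check table ZCUST with key field KUNNR:

* Skip any record whose customer does not exist in the check table.
* ZCUST and the field COMM_STRUCTURE-KUNNR are illustrative only.
DATA: L_KUNNR TYPE KUNNR.

SELECT SINGLE KUNNR FROM ZCUST INTO L_KUNNR
  WHERE KUNNR = COMM_STRUCTURE-KUNNR.
IF SY-SUBRC <> 0.
* A non-zero return code makes BW skip this record.
  RETURNCODE = 4.
ENDIF.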
48) What is activation of the extract structure in LO?

49) What is the difference between an Info IDoc and a data IDoc?

50) What is delta management in LO?
A) It is the method used by the LO delta update, based on the change log in LO.

Variable Types

Types of Variables:

The type of variable being used. This determines the object that the variable represents as a placeholder for a concrete value.

There are different types of variables depending on the object for which you want to define variables. These types specify where you can use the variables.

· Characteristic Value Variables

Characteristic value variables represent characteristic values and can be used wherever characteristic values are used.

If you restrict characteristics to specific characteristic values, you can also use characteristic value variables.

· Hierarchy Variables

Hierarchy variables represent hierarchies and can be used wherever hierarchies can be selected.

If you restrict characteristics to specific hierarchies or select presentation hierarchies, you can also use hierarchy variables.

· Hierarchy Node Variables

Hierarchy node variables represent a node in a hierarchy and can be used wherever hierarchy nodes are used.

If you restrict characteristics to specific hierarchy nodes, you can also use hierarchy node variables.

· Text Variables

Text variables represent a text and can be used in descriptions of queries, calculated key figures and structural components.

key performance indicators

If you are struggling to define your company's key performance indicators (KPIs), here is a useful bit of information. I recently discovered an interesting website dedicated to identifying KPIs for just about every category you can think of. And it is FREE!

Yes, indeed -- a website described as "The free Key Performance Indicator (KPI) Library is a community of business professionals that provides guidance in identifying and prioritizing the KPIs that really matter for your organization's success." The categories include business, compliance & legislation, environmental, finance, HR, IT, outsourcing, procurement, project portfolio, R & D, supply chain & logistics, and many others. Warning -- you have to register to use the site but I found it quite useful.

The site contains (at the time of this posting) 943 KPIs. Here are a few by name:

Market share gain comparison %
Ad click-through ratio (CTR)
Cash dividends paid
Share price
Perfect Order Measure
Average customer recency
Average number of trackbacks per post
Number of past due loans
% of service requests posted via web (self-help)
Total energy used per unit of production
Cumulative Annual Growth Rate (CAGR)

With each entry, you get a definition, the category it belongs to, and the ability to share it using any number of book marking applications. You can also comment on the KPI and add your own to the listing. As an example, here is the entry for Cumulative Annual Growth Rate (CAGR):

"This tells the story about any company as to what rate the company has grown over years irrespective of consistency in growth YoY basis. A company might have one successful year and then a bad year. If you compare the growth rate YoY basis it may give a different picture and be concluded as lack of consistency in management. But if one looks at the CAGR, it will explain the real growth over years. It is calculated as:

=Power(Revenue Year (n)/Revenue Year(1),1/n) - 1

Where, Revenue Year (n)= n-th year Revenue Revenue Year(1) = 1st year Revenue"
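
Taking the site's formula at face value, a quick worked example: if first-year revenue is 100 and revenue in year n = 5 is 200, then Power(200/100, 1/5) - 1 = 2^(1/5) - 1 ≈ 0.149, i.e. roughly 14.9% growth per year. (Note that many definitions of CAGR use the number of elapsed periods, n - 1, in the exponent; with 1/(n - 1) = 1/4 the rate would be about 18.9%.)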

And on it goes...

How to retain deltas when you change LO extractor in Production system

A requirement may come up to add new fields to an LO cockpit extractor that is up and running in the production environment, i.e. the extractor is delivering daily deltas from SAP R/3 to the BW system.

Since this change is to be done in the R/3 production system, there is always a risk that the daily deltas of the LO cockpit extractor get disturbed. If the delta mechanism is disturbed (the delta queue is broken), there is no other way than doing a re-initialization for that extractor. However, such a re-init is not easy in terms of time and resources, and hardly any organization is willing to provide that much downtime for live reporting based on that extractor.

As we all know, initialization of an LO extractor is a critical, resource-intensive and time-consuming task. The prerequisites for filling the setup tables are: lock users out of transactional updates in the R/3 system and stop all batch jobs that update the base tables of the extractor. Then schedule the setup jobs with suitable date ranges/document number ranges.

We came across such a scenario ourselves: a requirement to add 3 new fields to the existing LO cockpit extractor 2LIS_12_VCITM. Initialization had been done for this extractor a year earlier and the data volume was high.

We adopted a step-by-step approach to minimize the risk of the delta queue getting broken or disturbed. Hopefully this step-by-step procedure will help anyone who has to work through a similar scenario.

Step-by-Step Procedure:

1. Carry out the changes to the LO cockpit extractor in the SAP R/3 development system.
Add the new fields to the extractor as per the requirement.
These new fields might already be present in the standard supporting structures that you see when you execute "Maintain DataSource" for the extractor in LBWE. If all required fields are present in those supporting structures, just add them using the arrow buttons provided; there is no need to write user exit code to populate them.
However, if these fields (or some of the required fields) are not present in the supporting structures, you have to go for an append structure and user exit code. The coding in the user exit is required to populate the newly added fields: you write ABAP code in the user exit under CMOD, in include ZXRSAU01 (a sketch follows below).
All of the above changes will prompt you for a transport request. Assign an appropriate development class/package and include all these objects in one transport request.
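
For illustration, a minimal sketch of what the user exit code in include ZXRSAU01 might look like for this case. The appended field ZZMATKL, the MARA lookup and the extract structure name MC12VC0ITM are assumptions for the example; only the frame (CASE on I_DATASOURCE, loop over C_T_DATA) follows the usual pattern of EXIT_SAPLRSAP_001 for transaction data:

* Include ZXRSAU01 - user exit for transaction data extraction.
DATA: L_S_ITM TYPE MC12VC0ITM,
      L_TABIX TYPE SY-TABIX.

CASE I_DATASOURCE.
  WHEN '2LIS_12_VCITM'.
    LOOP AT C_T_DATA INTO L_S_ITM.
      L_TABIX = SY-TABIX.
*     Hypothetical derivation: material group of the delivery item.
      SELECT SINGLE MATKL FROM MARA INTO L_S_ITM-ZZMATKL
        WHERE MATNR = L_S_ITM-MATNR.
      MODIFY C_T_DATA FROM L_S_ITM INDEX L_TABIX.
    ENDLOOP.
ENDCASE.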

2. Carry out the changes in the BW development system for objects related to this change.
Carry out all necessary changes in the BW development system (InfoSource, transfer rules, ODS, InfoCubes, queries and workbooks). Assign an appropriate development class/package and include all these objects in one transport request.

3. Test the changes in the QA systems.
Test the new changes in the SAP R/3 and BW QA systems. Make any necessary corrections and include them in follow-up transports.

4. Stop the V3 batch jobs for this extractor.
The V3 batch jobs for this extractor are scheduled to run periodically (hourly, daily, etc.). Ask the R/3 system administrator to put this job schedule on hold or cancel it.

5. Lock out users and batch jobs on the R/3 side, and stop the process chain schedule in BW.
In order to avoid changes in the database tables underlying this extractor, and hence the possible risk of data loss, ask the R/3 system administrator to lock out the users. The batch job schedule also needs to be put on hold or cancelled.
Ask the system administrator to clear any pending queues for this extractor in SMQ1/SMQ2. Pending or errored-out V3 updates in SM58 should also be processed.
On the BW production system, the process chain for the delta InfoPackage of this extractor should be stopped or put on hold.

6. Drain the delta queue to zero for this extractor.
Execute the delta InfoPackage from BW and load the data into the ODS and InfoCubes. Keep executing the delta InfoPackage until you get 0 records with a green request status on the BW side; you should also see 0 LUW entries in RSA7 for this extractor on the R/3 side.

7. Import the R/3 transports into the R/3 production system.
In this step we import the R/3 transport request related to this extractor, which also includes the user exit code. Ensure that include ZXRSAU01 has no syntax errors and is active, and that objects such as the append structure are active after the transport.

8. Replicate the DataSource in the BW system.
On the BW production system, replicate the extractor (DataSource).

9. Import the BW transport into the BW production system.
In this step we import the BW transport related to this change into the BW production system.

10. Run the program to activate the transfer rules.
Execute program RS_TRANSTRU_ACTIVATE_ALL: enter the InfoSource and source system name and execute. This makes sure that the transfer rules for this InfoSource are active.

11. Execute the V3 job manually on the R/3 side.
Go to LBWE and click on Job Control for the application area of this extractor (for 2LIS_12_VCITM it is application 12). Execute the job immediately; it should finish with no errors.

12. Execute the delta InfoPackage from the BW system.
Run the delta InfoPackage from the BW system. Since there has been no data update, this extraction request should be green with zero records (a successful delta extraction).

13. Restore the schedules on the R/3 and BW systems.
Ask the system administrator to resume the V3 update job schedule and the batch job schedule, and to unlock the users. On the BW side, restore the process chain schedule.

From the next day onwards (or as per the delta frequency), you should receive the delta for this extractor, with the new fields populated as well.

What Is SPRO In BW Project?


1) What is SPRO?


1. SPRO is the transaction code for the Implementation Guide, where you do configuration settings.
* Type SPRO in the transaction box and you will reach the Customizing: Execute Project screen.
* Click on the SAP Reference IMG button; you will come to the Display IMG screen.
* The following path will allow you to do the configuration settings:
SAP Customizing Implementation Guide -> SAP NetWeaver -> SAP Business Information Warehouse.
2) How is SPRO used in a BW project?
2. SPRO is used to configure the following settings:
* General settings: printer settings, fiscal year settings, ODS object settings, authorization settings, settings for displaying SAP documents, etc.
* Links to other systems: links between flat files and BW systems, between R/3 and BW and other data sources, between the BW system and Microsoft Analysis Services, Crystal Enterprise, etc.
* UD Connect settings: configuring the BI Java Connectors, establishing the RFC destination from SAP BW to the J2EE engine, installing availability monitoring for UD Connect.
* Automated processes: settings for batch processes, background processes, etc.
* Transport settings: source system name change after transport, and creating the destination for import post-processing.
* Reporting-relevant settings: BEx settings, general reporting settings.
* Settings for Business Content, which is already provided by SAP.

3) What is the difference between IDoc and PSA as transfer methods?
3. PSA (Persistent Staging Area): a holding area for raw data. It contains detailed requests in the format of the transfer structure. It is defined per DataSource and source system, and is source system dependent.

IDocs (Intermediate Documents): data structures used as API working storage for applications that need to move data into or out of SAP systems.

V3 update: Questions and answers

Question 1
Update records are written to SM13, although you do not use the extractors from the logistics cockpit (LBWE) at all.
Active DataSources may have been accidentally delivered in a PI patch; for that reason, extract structures are set to active in the logistics cockpit. Call transaction LBWE and deactivate the active structures. From then on, no additional records are written to SM13.
If the system displays update records for application 05 (QM) in transaction SM13, even though the structure is not active, see note 393306 for a solution.

Question 2
How can I selectively delete update records from SM13?
Start the report RSM13005 for the respective module (e.g. MCEX_UPDATE_03).

* Status COL_RUN INIT: without Delete_Flag, but with VB_Flag (the records are updated).

* Status COL_RUN OK: with Delete_Flag (the records are deleted for all modules with COL_RUN = OK).

With the IN_VB flag, data is only deleted if there is no delta initialization; otherwise, the records are updated.
MAXFBS: the number of records processed without a commit.

ATTENTION: the delta records are deleted irrevocably after executing report RSM13005 (without the IN_VB flag). You can reload the data into BW only with a new delta initialization!

Question 3
What can I do when the V3 update loops?
Refer to Note 0352389. If you need a fast solution, simply delete all entries from SM13 (executed for V2); however, this does not solve the actual problem.

ATTENTION: THIS CAUSES DATA LOSS. See question 2!

Question 4
Why has SM13 not been emptied even though I have started the V3 update?

* The update record in SM13 contains several modules (for example, MCEX_UPDATE_11 and MCEX_UPDATE_12). If you start the V3 update only for one module, the other module still has INIT status in SM13 and is waiting for the corresponding collective run. In some cases, the entry might also not be deleted if the V3 update has been started for the second module. In this case, schedule report RSM13005 with the DELETE_FLAG (see question 2).

* V3 updating no longer functions after the PI upgrade because you did not load all the delta records into the BW system prior to the upgrade. Proceed as described in Note 328181.

Question 5
The entries from SM13 have not been retrieved even though I followed note 0328181!
Check whether all entries were actually deleted from SM13 for all clients. Look for records within the last 25 years with user * .

Question 6
Can I schedule V3 update in parallel?
The V3 update already uses collective processing. You cannot run it in parallel.

Question 7
The Logistics Cockpit extractors deliver incorrect numbers. The update contains errors!
Have you installed the most up-to-date PI in your OLTP system?
You should have at least PI 2000.1 patch 6 or PI 2000.2 patch 2.

Question 8
Why has no data been written into the delta queue even though the V3 update was executed successfully?
You have probably not started a delta initialization. You have to start a delta initialization for each DataSource from the BW system before you can load the delta. Check in RSA7 for an entry with a green status for the required DataSource. Refer also to Note 0380078.

Question 9
Why does the system write data into the delta queue, even though the V3 update has not been started?
You are using automatic goods receipt posting (transaction MRRS) and start it in the background. In this case the system writes the records for DataSources of application 02 directly into the delta queue (RSA7). This does not cause duplicate data records and does not result in any inconsistencies.

Question 10
Why am I not able to carry out a structural change in the Logistics Cockpit although SM13 is blank?
Inconsistencies occurred in your system: there are records in update table VBMOD for which there are no entries in table VBHDR. Because of those missing records, there are no entries in SM13. To remove the inconsistencies, follow the instructions in the solution part of Note 67014. Note that in any case no postings must be made in the system during the reorganization!

Question 11
Why is it impossible to schedule a V3 job from the Logistics Cockpit?
The job always abends immediately: due to missing authorizations, the update job cannot be scheduled. For further information see Note 445620.

What is Designer, and how do you create a Universe?

What is Designer?

Designer is a BusinessObjects IS module used by universe designers to create and maintain universes. Universes are the semantic layer that isolates end users from the technical issues of the database structure. Universe designers can distribute universes to end users by moving them as files through the file system, or by exporting them to the repository.

A BO universe is essentially a semantic connection layer sitting between the source database and the reports. It is defined by the data mapping, or schema, i.e. the relationships between database tables. Each universe is accessed by a certain category of users: for example, finance people access the finance universe and sales people the sales universe. The analogy is similar to a data mart.

The advantage of the BO universe is that if there is any change in the source data structure, the change needs to be made only in the universe, and its effect is pushed down to all the reports based on that universe. A good universe design helps improve speed and contributes to best practices when using BO.

How do you design a universe?

The design method consists of two major phases.
During the first phase, you create the underlying database structure of your universe. This structure includes the tables and columns of a database and the joins by which they are linked. You may need to resolve loops which occur in the joins using aliases or contexts. You can conclude this phase by testing the integrity of the overall structure.
During the second phase, you can proceed to enhance the components of your universe. You can also prepare certain objects for multidimensional analysis. As with the first phase, you should test the integrity of your universe structure. Finally, you can distribute your universes to users by exporting them to the repository or via your file system.

Need to delete data for enhancing a DataSource?

There is no need to delete any data on the BW or R/3 side when enhancing a DataSource, provided you load the data into an ODS in overwrite mode.

Simply follow the below steps:

1. Add new fields to ODS & cubes and adjust update rules.

2. Clear LBWQ (the update queue) by running the V3 job.

3. Clear RSA7 (the delta queue) by running the InfoPackage to pull the data into BW.

4. Move datasource changes to Production, replicate and activate transfer rules.

5. Delete data in LBWG (setup tables).

6. Fill setup tables for historic data.

7. Initialize the DataSource if required, without data transfer (zero initialization).

8. Pull data from R/3 into the BW ODS in overwrite mode with the Repair Full Request option.

Since the data is loaded in overwrite mode there is no problem: just load the historic data again as well, and push the delta from the ODS onwards (ODS/cube).

9. Push delta from ODS to CUBE.

Tuesday, December 1, 2009

To create a DBLINK using Derived Tables

1. Create a DBLINK in Oracle on Server1 with the following statement:

CREATE DATABASE LINK dblink_name CONNECT TO user_name_on_server2 IDENTIFIED BY password USING 'connect_string_to_server2';

2. Create a synonym for the DBLINK on Server1 using the following statement:

CREATE SYNONYM synonym_name FOR user_name_on_server2.table_name_on_server2@dblink_name_server2;

3. Ensure the connect string used for the linked database is defined in the tnsnames.ora file on Server1 and points to the target database on Server2. If it is not, Oracle returns the ORA-12154 error message.

4. Log in to Server1.

5. Query the DBLINK synonym using the following SQL*Plus statement:

SELECT * FROM synonym_name;

6. Log into Designer.

7. Click Insert Table > Derived Tables.

8. Query the DBLINK synonym in the derived table definition:

SELECT * FROM synonym_name

9. If an error is returned, close the Derived Tables dialog box and reopen it. If the message "Parse OK" is returned, click OK.