SAP BW .....all info @ one place
SAP BW relevant Information
Loading

Question 1
Update records are written to SM13, although you do not use the extractors from the Logistics Cockpit (LBWE) at all.
Active DataSources were accidentally delivered in a PI patch. For that reason, extract structures are set to active in the Logistics Cockpit. Call transaction LBWE and deactivate the active structures. From then on, no additional records are written into SM13.
If the system displays update records for application 05 (QM) in transaction SM13, even though the structure is not active, see note 393306 for a solution.

Question 2
How can I selectively delete update records from SM13?
Start the report RSM13005 for the respective module (e.g. MCEX_UPDATE_03).

  • Status COL_RUN INIT: without Delete_Flag but with VB_Flag (the records are updated).
  • Status COL_RUN OK: with Delete_Flag (the records are deleted for all modules with status COL_RUN OK).

With the IN_VB flag, data is deleted only if there is no delta initialization; otherwise, the records are updated.
MAXFBS: the number of records processed without a commit.

ATTENTION: The delta records are deleted irrevocably after executing report RSM13005 (without the IN_VB flag). You can reload the data into BW only with a new delta initialization!

Question 3
What can I do when the V3 update loops?
Refer to Note 0352389. If you need a fast solution, simply delete all entries from SM13 (executed for V2); however, this does not solve the actual problem.

ATTENTION: THIS CAUSES DATA LOSS. See question 2 !

Question 4
Why has SM13 not been emptied even though I have started the V3 update?

  • The update record in SM13 contains several modules (for example, MCEX_UPDATE_11 and MCEX_UPDATE_12). If you start the V3 update for only one module, the other module still has INIT status in SM13 and waits for the corresponding collective run. In some cases, the entry might not be deleted even after the V3 update has been started for the second module. In this case, schedule the report RSM13005 with the DELETE_FLAG (see Question 2).
  • V3 updating no longer functions after the PI upgrade because you did not load all the delta records into the BW system prior to the upgrade. Proceed as described in Note 328181.
Question 5
The entries from SM13 have not been retrieved even though I followed note 0328181!
Check whether all entries were actually deleted from SM13 for all clients. Look for records within the last 25 years with user * .

Question 6
Can I schedule V3 update in parallel?
The V3 update already uses collective processing; it cannot be run in parallel.

Question 7
The Logistics Cockpit extractors deliver incorrect numbers. The update contains errors!
Have you installed the most up-to-date PI in your OLTP system?
You should have at least PI 2000.1 patch 6 or PI 2000.2 patch 2.

Question 8
Why has no data been written into the delta queue even though the V3 update was executed successfully?
You have probably not started a delta initialization. You have to start a delta initialization for each DataSource from the BW system before you can load the delta. Check RSA7 for an entry with a green status for the required DataSource. Refer also to Note 0380078.

Question 9
Why does the system write data into the delta queue, even though the V3 update has not been started?
You are using automatic goods receipt posting (transaction MRRS) and start it in the background. In this case, the system writes the records for DataSources of application 02 directly into the delta queue (RSA7). This does not cause duplicate data records and does not result in any inconsistencies.

Question 10
Why am I not able to carry out a structural change in the Logistics Cockpit although SM13 is blank?
Inconsistencies occurred in your system: there are records in update table VBMOD for which there are no entries in table VBHDR. Because of those missing records, there are no entries in SM13. To remove the inconsistencies, follow the instructions in the solution part of Note 67014. Please note that in any case no postings may be made in the system during the reorganization!

Question 11
Why is it impossible to schedule a V3 job from the Logistics Cockpit?
The job always abends immediately: due to missing authorizations, the update job cannot be scheduled. For further information, see Note 445620.

 

Questions and answers related to T-Code RSA7 (Delta Queue)

This note is maintained here for my quick reference and for those who don't have SAP Notes access :-)

Question 1:
What does the number in the 'Total' column in Transaction RSA7 mean?
Answer:
The 'Total' column displays the number of LUWs that were written in the delta queue and that have not yet been confirmed. The number includes the LUWs of the last delta request (for repeating a delta request) and the LUWs for the next delta request. An LUW only disappears from the RSA7 display when it has been transferred to the BW System and a new delta request has been received from the BW System.

Question 2:
What is an LUW in the delta queue?
Answer:
An LUW from the point of view of the delta queue can be an individual document, a group of documents from a collective run or a whole data packet from an application extractor.

Question 3:
Why does the number in the 'Total' column, in the overview screen of Transaction RSA7, differ from the number of data records that are displayed when you call up the detail view?
Answer:
The number on the overview screen corresponds to the total number of LUWs (see also question 1) that were written to the qRFC queue and that have not yet been confirmed. The detail screen displays the records contained in the LUWs. Both the records belonging to the previous delta request and the records that do not meet the selection conditions of the preceding delta init requests are filtered out. This means that only the records that are ready for the next delta request are displayed on the detail screen. The detail screen of Transaction RSA7 does not take into account a possibly existing customer exit.

Question 4:
Why does Transaction RSA7 still display LUWs on the overview screen after successful delta loading?
Answer:
Only when a new delta has been requested does the source system learn that the previous delta was successfully loaded into the BW System. The LUWs of the previous delta may then be confirmed (and also deleted). In the meantime, the LUWs must be kept for a possible delta request repetition. In particular, the number on the overview screen does not change if the first delta is loaded into the BW System.

Question 5:
Why are selections not taken into account when the delta queue is filled?
Answer:
Filtering according to selections takes place when the system reads from the delta queue. This is necessary for performance reasons.

Question 6:
Why is there a DataSource with '0' records in RSA7 if delta exists and has been loaded successfully?
Answer:
It is most likely that this is a DataSource that does not send delta data to the BW System via the delta queue but directly via the extractor. You can display the current delta data for these DataSources using transaction RSA3 (update mode = 'D').

Question 7:
Do the entries in Table ROIDOCPRMS have an impact on the performance of the loading procedure from the delta queue?
Answer:
The impact is limited. If performance problems are related to the loading process from the delta queue, then refer to the application-specific notes (for example in the CO-PA area, in the logistics cockpit area, and so on).
Caution: As of PlugIn 2000.2 patch 3, the entries in Table ROIDOCPRMS are as effective for the delta queue as for a full update. Note, however, that LUWs are not split during data loading for consistency reasons. This means that when very large LUWs are written to the delta queue, the actual package size may differ considerably from the MAXSIZE and MAXLINES parameters.

Question 8:
Why does it take so long to display the data in the delta queue (for example approximately 2 hours)?
Answer:
With PlugIn 2001.1 the display was changed: you are now able to define the amount of data to be displayed, to restrict it, to selectively choose the number of a data record, to make a distinction between the 'actual' delta data and the data intended for repetition, and so on.

Question 9:
What is the purpose of the function 'Delete Data and Meta Data in a Queue' in RSA7? What exactly is deleted?
Answer:
You should act with extreme caution when you use the delete function in the delta queue. It is comparable to deleting an InitDelta in the BW System and should preferably be executed there. Not only do you delete all data of this DataSource for the affected BW System, but you also lose all the information concerning the delta initialization. Then you can only request new deltas after another delta initialization.
When you delete the data, this confirms the LUWs kept in the qRFC queue for the corresponding target system. Physical deletion only takes place in the qRFC outbound queue if there are no more references to the LUWs.
The delete function is intended for example, for cases where the BW System, from which the delta initialization was originally executed, no longer exists or can no longer be accessed.

Question 10:
Why does it take so long to delete from the delta queue (for example half a day)?
Answer:
Import PlugIn 2000.2 patch 3. With this patch the performance during deletion improves considerably.

Question 11:
Why is the delta queue not updated when you start the V3 update in the logistics cockpit area?
Answer:
It is most likely that a delta initialization has not yet run or that the delta initialization was not successful. A successful delta initialization (the corresponding request must have QM status 'green' in the BW System) is a prerequisite for the application data to be written to the delta queue.

Question 12:
What is the relationship between RSA7 and the qRFC monitor (Transaction SMQ1)?
Answer:
The qRFC monitor basically displays the same data as RSA7. The internal queue name must be used for selection on the initial screen of the qRFC monitor. It is made up of the prefix 'BW', the client, and the short name of the DataSource. For DataSources whose name is shorter than 20 characters, the short name corresponds to the name of the DataSource. For DataSources whose name is longer than 19 characters (for delta-capable DataSources, only possible as of PlugIn 2001.1), the short name is assigned in Table ROOSSHORTN.
In the qRFC monitor you cannot distinguish between repeatable and new LUWs. Moreover, the data of a LUW is displayed in an unstructured manner there.
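As an illustration of the naming rule above, the internal queue name can be built by concatenating the prefix, client, and DataSource short name. This is only a hedged sketch; the DataSource 2LIS_11_VAITM is an example, not taken from this note:

```abap
* Sketch: build the internal qRFC queue name for the SMQ1
* selection screen, per the rule above: prefix 'BW' + client
* + DataSource short name. The DataSource here is an example.
DATA lv_qname TYPE trfcqout-qname.

CONCATENATE 'BW' sy-mandt '2LIS_11_VAITM' INTO lv_qname.
* In client 100 this yields 'BW1002LIS_11_VAITM'.
```

Entering this value in the queue name field of SMQ1 should show the same LUWs as RSA7 displays for that DataSource.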

Question 13:
Why is there data in the delta queue although the V3 update has not yet been started?
Answer:
You posted data in the background. This means that the records are updated directly in the delta queue (RSA7). This happens in particular during automatic goods receipt posting (MRRS). There is no duplicate transfer of records to the BW system. See Note 417189.

Question 14:
Why does the 'Repeatable' button on the RSA7 data details screen not only show data loaded into BW during the last delta but also newly-added data, in other words, 'pure' delta records?
Answer:
It was programmed so that the request in repeat mode fetches both actually repeatable (old) data and new data from the source system.

Question 15:
I loaded several delta inits with various selections. For which one is the delta loaded?
Answer:
For delta, all selections made via delta inits are summed up. This means a delta for the 'total' of all delta initializations is loaded.

Question 16:
How many selections for delta inits are possible in the system?
Answer:
With simple selections (intervals without complicated join conditions or single values), you can create up to about 100 delta inits; it should not be more.
With complicated selection conditions, there should be no more than 10-20 delta inits.
Reason: with many selection conditions joined in a complicated way, too many 'where' lines are generated in the generated ABAP source code, which may exceed the memory limit.

Question 17:
I intend to copy the source system, i.e. make a client copy. What will happen with my delta? Should I initialize again after that?
Answer:
Before you copy a source client or source system, make sure that your deltas have been fetched from the delta queue into BW and that no delta is pending. After the client copy, an inconsistency might occur between BW delta tables and the OLTP delta tables as described in Note 405943. After the client copy, Table ROOSPRMSC will probably be empty in the OLTP since this table is client-independent. After the system copy, the table will contain the entries with the old logical system name which are no longer useful for further delta loading from the new logical system. The delta must be initialized in any case since delta depends on both the BW system and the source system. Even if no dump 'MESSAGE_TYPE_X' occurs in BW when editing or creating an InfoPackage, you should expect that the delta has to be initialized after the copy.

Question 18.
Am I permitted to use the functions in Transaction SMQ1 to manually control processes?
Answer:
Use SMQ1 as an instrument for diagnosis and control only. Make changes to BW queues only after informing BW Support or only if this is explicitly requested in a note for Component 'BC-BW' or 'BW-WHM-SAPI'.

Question 19.
Despite the delta request only being started after completion of the collective run (V3 update), it does not contain all documents. Only another delta request loads the missing documents into BW. What is the cause for this "splitting"?
Answer:
The collective run submits the open V2 documents to the task handler for processing. The task handler processes them in one or several parallel update processes in an asynchronous way. For this reason, plan a sufficiently large "safety time window" between the end of the collective run in the source system and the start of the delta request in BW. An alternative solution where this problem does not occur is described in Note 505700.

Question 20.
Despite deleting the delta init, LUWs are still written into the delta queue.
Answer:
In general, delta initializations and deletions of delta inits should always be carried out at a time when no posting takes place. Otherwise, buffer problems may occur: if an internal session was started at a time when the delta initialization was still active, it posts data into the queue even though the initialization has been deleted in the meantime. This is the case in your system.

Question 21.
In SMQ1 (qRFC Monitor) I have status 'NOSEND'. In the Table TRFCQOUT, some entries have the status 'READY', others 'RECORDED'. ARFCSSTATE is 'READ'. What do these statuses mean? Which values in the field 'Status' mean what and which values are correct and which are alarming? Are the statuses BW-specific or generally valid in qRFC?
Answer:
Table TRFCQOUT and ARFCSSTATE: status READ means that the record was read once, either in a delta request or in a repetition of the delta request. However, this does not yet mean that the record has successfully reached the BW. The statuses READY in TRFCQOUT and RECORDED in ARFCSSTATE mean that the record has been written into the delta queue and will be loaded into the BW with the next delta request or a repetition of a delta. In any case, only the statuses READ, READY, and RECORDED in both tables are considered valid. The status EXECUTED in TRFCQOUT can occur temporarily: it is set before starting a delta extraction for all records with status READ present at that time. The records with status EXECUTED are usually deleted from the queue in packages within a delta request, directly after setting the status and before extracting a new delta. If you see such records, it means either that a process which confirms and deletes records loaded into the BW is currently running successfully, or, if the records remain in the table for a longer period of time with status EXECUTED, that there are likely problems with deleting the records which have already been successfully loaded into the BW. In this state, no more deltas are loaded into the BW. Every other status indicates an error or an inconsistency. NOSEND in SMQ1 means nothing (see Note 378903). However, the value 'U' in the field NOSEND of table TRFCQOUT is of concern.

Question 22.
The extract structure was changed when the delta queue was empty. Afterwards new delta records were written to the delta queue. When loading the delta into the PSA, it shows that some fields were moved. The same result occurs when the contents of the delta queue are listed via the detail display. Why is the data displayed differently? What can be done?
Answer:
Make sure that the change of the extract structure is also reflected in the database and that all servers are synchronized. We recommend resetting the buffers using Transaction $SYNC. If the extract structure change is not communicated synchronously to the server where delta records are being created, the records are written with the old structure until the new structure has been generated. This may have disastrous consequences for the delta. When the problem occurs, the delta needs to be re-initialized.

Question 23.
How and where can I control whether a repeat delta is requested?
Answer:
Via the status of the last delta in the BW Request Monitor. If the request is RED, the next load will be of type 'Repeat'. If you need to repeat the last load for any reason, manually set the request in the monitor to red. For the contents of the repeat, see Question 14. Delta requests set to red when data is already updated lead to duplicate records in a subsequent repeat, if they have not already been deleted from the data targets concerned.

Question 24.
As of PI 2003.1, the Logistics Cockpit offers various types of update methods. Which update method is recommended in logistics? According to which criteria should the decision be made? How can I choose an update method in logistics?
Answer:
See the recommendation in Note 505700.

Question 25.
Are there particular recommendations regarding the maximum data volume of the delta queue to avoid danger of a read failure due to memory problems?
Answer:
There is no strict limit (except for the restricted number area of the 24-digit QCOUNT counter in the LUW management table - which is of no practical importance, however - or the restrictions regarding the volume and number of records in a database table).
When estimating "soft" limits, both the number of LUWs and the average data volume per LUW are important. As a rule, we recommend bundling data (usually documents) as soon as you write to the delta queue, in order to keep the number of LUWs low (this can partly be configured in the applications, for example in the Logistics Cockpit). The data volume of a single LUW should not be much larger than 10% of the memory available to the work process for data extraction (in a 32-bit architecture with a memory volume of about 1 GByte per work process, 100 MByte per LUW should not be exceeded). This limit is of rather small practical importance as well, since a comparable limit already applies when writing to the delta queue. If the limit is observed, correct reading is guaranteed in most cases.
If the number of LUWs cannot be reduced by bundling application transactions, you should at least make sure that the data is fetched from all connected BWs as quickly as possible. But for other, BW-specific, reasons, the frequency should not exceed one delta request per hour.
To avoid memory problems, a program-internal limit ensures that no more than 1 million LUWs are ever read and fetched from the database per delta request. If this limit is reached within a request, the delta queue must be emptied by several successive delta requests. We recommend, however, to try not to reach that limit but trigger the fetching of data from the connected BWs as soon as the number of LUWs reaches a 5-digit value.

---> Some more related Notes....
873694 - Consulting: Delta repeat and status in monitor/data target
771894 - No data during delta upload: Selection on Z* fields
723935 - Adding the TID display to the DeltaQueue monitor
691721 - Restoring lost data from a delta request
576896 - Checks when PSA contains incorrect data for delta requests

574601 - BW-SAPI: Endless loop when confirming qRFC LUWs
417307 - Extractor package size: Collective note for applications
417189 - BW/SAPLEINS - Online update of delta queue
405943 - Calling an InfoPackage in BW causes short dump
377732 - Collective SAP note SAP BW BCT 2.1C for EBP 2.0 and 3.0

 

SAP BW Document Downloads

By Srinivas Neelam

Documents for SAP Business Warehouse from other web sites

SAP BW BPS Advanced Budgeting Example which gives a glance of BPS Functionality
SAP BW BPS Advanced Budgeting Example

Sap BW Authorizations docs in detail
Sap BW Authorizations

Sap BW BPS Planning Folders and Layouts
Sap BW BPS Planning Folders and Layout

Sap BW Best practices overview Presentation
Sap BW Best practices overview

How to integrate Sap BI with XI PDF
How to integrate Sap BI with XI

Sap BW Netweaver Installation MEDIA_LIST_NETWEAVER
Sap BW Netweaver Installation MEDIA_LIST_NETWEAVER

Online Analytical Processing(OLAP)
Online Analytical Processing(OLAP)

Sap BW Sizing Help for estimating the hardware resources needed
Sap BW Sizing Help

Sap BW Business Explorer(BEX) a detail learning document
Sap BW Business Explorer(BEX)

ABAP Required in Sap BW for ABAP routines,Functional modules in detail
ABAP Required in Sap BW

Sap BI Accelerator for high performance in queries
Sap BI Accelerator

Sap BW Cell editing in Bex
Sap BW Cell editing in Bex

Control and profitability Analysis in SAP BW
Sap BW copa

Exit Function in Sap BW
Exit Function in Sap BW

Sap BW Front End Designing
Sap BW Front End Designing

Sap BW Installation Guide
Sap BW Installation Guide

How to handle inventory in SAP BW
How to handle inventory in SAP BW

Sap BW Transaction Codes (t-codes)
Sap BW Transaction Codes

Sap BW BPS WEB Based Planning
Sap BW BPS WEB Based Planning

 

LO Extraction

By Srinivas Neelam

1. Go to transaction code RSA3 and see if any data is available related to your DataSource. If data is there in RSA3 then go to transaction code LBWG (Delete Setup data) and delete the data by entering the application name.

2. Go to transaction SBIW --> Settings for Application-Specific DataSources --> Logistics --> Managing Extract Structures --> Initialization --> Filling the Setup Table --> Application-Specific Setup of Statistical Data --> Perform setup (relevant application).

3. In OLI*** (for example, OLI7BW for the statistical setup of old documents: orders), give the name of the run and execute. Now all available records from R/3 will be loaded into the setup tables.

4. Go to transaction RSA3 and check the data.

5. Go to transaction LBWE and make sure the update mode for the corresponding DataSource is serialized V3 update.

6. Go to the BW system and create an InfoPackage; under the Update tab, select 'Initialize delta process' and schedule the package. All the data available in the setup tables is now loaded into the data target.

7. Now, for the delta records, go to LBWE in R/3 and change the update mode for the corresponding DataSource to direct/queued delta. Records will then bypass SM13 and go directly to RSA7. In transaction RSA7 you can see a green light; as soon as new records are added, you can see them in RSA7.

8. Go to the BW system and create a new InfoPackage for delta loads. Double-click the new InfoPackage; under the Update tab you can see the delta update radio button.

9. Now you can go to your data target and see the delta.

Some more info @

https://www.sdn.sap.com/irj/sdn/weblogs?blog=/pub/wlg/1096

https://www.sdn.sap.com/irj/sdn/weblogs?blog=/pub/wlg/1106

https://www.sdn.sap.com/irj/sdn/weblogs?blog=/pub/wlg/1183

https://www.sdn.sap.com/irj/sdn/weblogs?blog=/pub/wlg/1262

https://www.sdn.sap.com/irj/sdn/weblogs?blog=/pub/wlg/1522

 

Sometimes a requirement comes up where we cannot use any of the standard scheduling functions to start a process chain, an InfoPackage, or a job.

Ex: a process chain has to start at 3 different odd times in a day (e.g. 7, 12, and 20 hours).
Or some chains have to be started based on another load's success or failure...
Or we have to use the same process chain multiple times instead of including more InfoPackages... etc.

For the above requirements, to control process chains effectively it is better to start the process chain, job, or InfoPackage through an event.

T Code : SM62 --> To Create an Event.
T Code : SE38 --> To Create an ABAP Program
Use Function Module : BP_EVENT_RAISE to raise an event.

Scenario: On the first day of the week, the weekly load has to start first. On the remaining days of the week, the daily load has to run at 2 different times a day (e.g. 6 and 20 hours).

Steps:

1. Create Process Chain1 and schedule it based on an event; check the periodic checkbox.

2. Create Process Chain2, include the ABAP program below, and schedule it to run every hour (24x7) periodically. This chain will trigger Process Chain1 based on the conditions specified in the ABAP program.

3. See below doc : How to ... Integrate ABAP Program in Process Chain

https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3507aa90-0201-0010-6891-d7df8c4722f7

Sample Coding:
SELECTION-SCREEN BEGIN OF BLOCK daily_loads WITH FRAME TITLE text-001.
SELECTION-SCREEN COMMENT /1(50) comm0001.
PARAMETERS: dly_ld1(2) TYPE c.
PARAMETERS: dly_ld2(2) TYPE c.
SELECTION-SCREEN END OF BLOCK daily_loads.

SELECTION-SCREEN BEGIN OF BLOCK weekly_load WITH FRAME TITLE text-002.
SELECTION-SCREEN COMMENT /1(50) comm0002.
PARAMETERS: wkly_ld1(2) TYPE c.
SELECTION-SCREEN END OF BLOCK weekly_load.

DATA: hour(2) TYPE c.
DATA: l_cweek TYPE /bi0/oicalweek.
DATA: l_fdate TYPE /bic/oisss_date.
DATA: l_date  TYPE /bic/oisss_date.

INITIALIZATION.
  MOVE 'Input Daily Load Timings in 24Hour Format' TO comm0001.
  MOVE 'Input Weekly Load Timings in 24Hour Format' TO comm0002.
  CLEAR hour.
  hour = sy-timlo(2).

* Determine the first day of the fiscal week
  CALL FUNCTION 'ZBW_DATE_GET_FISCAL_WEEK'
    EXPORTING
      date = sy-datum
    IMPORTING
      week = l_cweek.

  CALL FUNCTION 'WEEK_GET_FIRST_DAY'
    EXPORTING
      week = l_cweek
    IMPORTING
      date = l_fdate.

  MOVE sy-datum TO l_date.

START-OF-SELECTION.
  IF l_fdate = l_date AND hour = wkly_ld1.

*   First day of the week at the weekly load hour: raise the weekly event
    CALL FUNCTION 'BP_EVENT_RAISE'
      EXPORTING
        eventid                = 'SSS_OBSS_WEEKLY_LOAD'
      EXCEPTIONS
        bad_eventid            = 1
        eventid_does_not_exist = 2
        eventid_missing        = 3
        raise_failed           = 4
        OTHERS                 = 5.

  ELSEIF hour = dly_ld1 OR hour = dly_ld2.

*   Daily load hours: raise the daily event
    CALL FUNCTION 'BP_EVENT_RAISE'
      EXPORTING
        eventid                = 'SSS_OB_SALES_DATA_LOAD'
      EXCEPTIONS
        bad_eventid            = 1
        eventid_does_not_exist = 2
        eventid_missing        = 3
        raise_failed           = 4
        OTHERS                 = 5.
    IF sy-subrc <> 0.
*     MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
*       WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
    ENDIF.

  ENDIF.


During execution, we can specify the daily and weekly load times on the selection screen (as below). To automate this, provide values on the selection screen, save them as a variant, and assign the variant in the process chain.

 

Delta loads are not possible after full loads when we are loading data into an ODS.

In order to run deltas, we have to load the data in 'Repair Full' mode, or we can convert full loads into repair full loads using a standard ABAP program.

You can set the repair full flag from the InfoPackage menu: Scheduler --> Repair Full Request --> check the checkbox as shown below.
If full loads already exist and delta loads need to be started, we first need to convert the full loads to repair full loads.

Use program RSSM_SET_REPAIR_FULL_FLAG to convert full loads to repair full.

Steps:

1. Go to T Code SE38 or SA38, provide the program name (RSSM_SET_REPAIR_FULL_FLAG), and execute.

2. On the screen below, provide the required ODS, DataSource, and source system names and execute.

3. You can see all available full requests in the ODS.

4. Choose the required requests and click on 'Change all Requests to Repair Full'.

 

Introduction to ASAP Methodology
AcceleratedSAP (ASAP) methodology is a proven, repeatable, and successful approach to implementing SAP solutions across industries and customer environments.
It provides content, tools and expertise from thousands of successful implementations.

Phase 1: Project Preparation:
During this phase, the team goes through initial planning and preparation for the SAP project:
Define project goals and objectives
Clarify the scope of implementation
Define project schedule, budget plan, and implementation sequence
Establish the project organization and relevant committees and assign resources

Phase 2: Business Blueprint
The purpose of this phase is to achieve a common understanding of how the company intends to run SAP to support its business, and to refine the original project goals and objectives and revise the overall project schedule. The result is the Business Blueprint, a detailed documentation of the results gathered during the requirements workshops.

Phase 3: Realization:
The purpose of this phase is to implement all the business process requirements based on the Business Blueprint. The system configuration methodology is provided in two work packages: Baseline (major scope) and Final configuration (remaining scope). Other key focus areas of this phase are conducting integration tests and drawing up end user documentation.

Phase 4: Final Preparation:
The purpose of this phase is to complete the final preparation (including testing, end user training, system management and cutover activities) to finalize your readiness to go live. The Final Preparation phase also serves to resolve all critical open issues. On successful completion of this phase, you are ready to run your business in your live SAP System.

Phase 5: Go Live & Support:
The purpose of this phase is to move from a project-oriented, pre-production environment to live production operation. The most important elements include setting up production support, monitoring system transactions, and optimizing overall system performance.

Web-Links for SAP Project Management, ASAP Methodology & Solution Manager

 

1. Create an InfoPackage. 2. Go to the Selections tab and choose Type 6 - ABAP Routine. You can see the following available options (F4 help).

3. Give a description and hit Enter; you will move to the following screen. 4. Write your code between 'begin of routine' and 'end of routine'.

5. See below sample code to select date range from Previous 6 days to Current date.

6. The l_t_range table is of the structure type RSSDLRANGE.
a. RSSDLRANGE contains SIGN, OPTION, LOW, and HIGH.
We need to populate these fields to pass the range dynamically.
Sample Code:
****$*$ begin of routine - insert your code only below this line *-*
DATA: l_idx LIKE sy-tabix.
DATA: date_low LIKE sy-datum.

* Date range: from 6 days ago up to today
date_low = sy-datum - 6.

READ TABLE l_t_range WITH KEY fieldname = 'CRDAT'.
l_idx = sy-tabix.

* Pass the range values to the l_t_range table
MOVE date_low TO l_t_range-low.
MOVE sy-datum TO l_t_range-high.
l_t_range-sign   = 'I'.   "I = include, E = exclude
l_t_range-option = 'BT'.  "BT = between
MODIFY l_t_range INDEX l_idx.
p_subrc = 0.
***$*$ end of routine - insert your code only before this line *-*

7. Syntax check and Save.

 

Requirement may come up to add new fields to LO cockpit extractor which is up & running in production environment. This means the extractor is delivering daily deltas from SAP R/3 to BW system .Since this change is to be done in R/3 Production system, there is always a risk that daily deltas of LO cockpit extractor would get disturbed. If the delta mechanism is disturbed (delta queue is broken) then there no another way than doing re-initialization for that extractor. However this re-init is not easy in terms of time & resource. Moreover no organization would be willing to provide that much downtime for live reporting based on that extractor.
As we all know, initialization of an LO extractor is a critical, resource-intensive and time-consuming task. The prerequisites for filling the setup tables are: lock users out of transactional updates in the R/3 system and stop all batch jobs that update the base tables of the extractor. Then schedule the setup jobs with suitable date ranges / document number ranges.
We came across such a scenario, where there was a requirement to add 3 new fields to the existing LO cockpit extractor 2LIS_12_VCITM. Initialization had been done for this extractor a year earlier and the data volume was high. We adopted a step-by-step approach to minimize the risk of the delta queue getting broken or disturbed. Hopefully this procedure will help everyone who has to work through a similar scenario.
Step by Step Procedure:-
1. Carry out the changes to the LO cockpit extractor in the SAP R/3 Dev system. As per the requirement, add the new fields to the extractor. These new fields might already be present in the standard supporting structures that you see when you execute "Maintain Datasource" for the extractor in LBWE. If all required fields are present in those supporting structures, just add them using the arrow buttons provided; there is no need to write user exit code to populate them. However, if these fields (or some of the required fields) are not present in the supporting structures, you have to use an append structure and user exit code. The coding in the user exit is required to populate the newly added fields: you write the ABAP code under CMOD in the include ZXRSAU01. All of the above changes will ask for a transport request; assign an appropriate development class/package and include all these objects in one transport request.
2. Carry out the changes in the BW Dev system for all objects related to this change (InfoSource, transfer rules, ODS, InfoCubes, queries and workbooks). Assign an appropriate development class/package and include all these objects in a transport request.
3. Test the changes in the QA systems. Test the new changes in the SAP R/3 and BW QA systems. Make any necessary corrections and include them in follow-up transports.
4. Stop the V3 batch jobs for this extractor. The V3 batch jobs for this extractor are scheduled to run periodically (hourly, daily, etc.). Ask the R/3 system administrator to put this job schedule on hold or cancel it.
5. Lock out users and batch jobs on the R/3 side and stop the process chain schedule on BW. To avoid changes to the database tables of this extractor, and hence the possible risk of data loss, ask the R/3 system administrator to lock out the users and to put the batch job schedule on hold or cancel it. Also ask the system administrator to clear any pending queues for this extractor in SMQ1/SMQ2, and to process any pending or errored-out V3 updates in SM58. On the BW production system, the process chain for the delta InfoPackage of this extractor should be stopped or put on hold.
6. Drain the delta queue to zero for this extractor. Execute the delta InfoPackage from BW and load the data into the ODS and InfoCubes. Keep executing the delta InfoPackage until you get 0 records with a green light for the request on the BW side; you should also see 0 LUW entries in RSA7 for this extractor on the R/3 side.
7. Import the R/3 transports into the R/3 production system. In this step we import the R/3 transport request related to this extractor, which includes the user exit code. Ensure that there is no syntax error in the include ZXRSAU01 and that it is active. Also ensure that objects such as the append structure are active after the transport.
8. Replicate the datasource in the BW system. On the BW production system, replicate the extractor (datasource).
9. Import the BW transport into the BW production system. In this step we import the BW transport related to this change into the BW production system.
10. Run the program to activate the transfer rules. Execute program RS_TRANSTRU_ACTIVATE_ALL, enter the InfoSource and source system name, and execute. This makes sure that the transfer rules for this InfoSource are active.
11. Execute the V3 job manually on the R/3 side. Go to LBWE and click on Job Control for the application area of this extractor (for 2LIS_12_VCITM it is application 12). Execute the job immediately; it should finish without errors.
12. Execute the delta InfoPackage from the BW system. Since there has been no data update, this extraction request should be green with zero records (a successful delta extraction).
13. Restore the schedules on the R/3 and BW systems. Ask the system administrator to resume the V3 update job schedule and the batch job schedule, and to unlock the users. On the BW side, restore the process chain schedule. From the next day onwards (or as per the load frequency), you should receive the delta for this extractor, with data also populated for the new fields.
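If the new fields have to be filled via the append structure and user exit route described in step 1, the code goes into include ZXRSAU01 (user exit EXIT_SAPLRSAP_001, enhancement RSAP0001). The following is only a sketch: ZZLFART is a hypothetical append field on the extract structure MC12VC0ITM, filled here from the delivery header table LIKP; adjust the field names and lookup logic to your own append structure.

```abap
*---------------------------------------------------------------------*
* Sketch only - user exit EXIT_SAPLRSAP_001, include ZXRSAU01.
* ZZLFART is a hypothetical append field on MC12VC0ITM; replace it
* (and the LIKP lookup) with your actual fields and source tables.
*---------------------------------------------------------------------*
DATA: l_s_vcitm TYPE mc12vc0itm,
      l_tabix   LIKE sy-tabix.

CASE i_datasource.
  WHEN '2LIS_12_VCITM'.
    LOOP AT c_t_data INTO l_s_vcitm.
      l_tabix = sy-tabix.
*     Look up the delivery type for this delivery number
      SELECT SINGLE lfart FROM likp
             INTO l_s_vcitm-zzlfart
             WHERE vbeln = l_s_vcitm-vbeln.
*     Write the enriched record back into the extraction table
      MODIFY c_t_data FROM l_s_vcitm INDEX l_tabix.
    ENDLOOP.
ENDCASE.
```

For large data packages, a SELECT SINGLE per record can be slow; a common variant is to read all needed keys into an internal table first (FOR ALL ENTRIES) and look them up per record.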
