Channel: SCN : Unanswered Discussions - SAP on Oracle
Viewing all 1304 articles

ORA-00304: requested INSTANCE_NUMBER is busy in DB Instance Installation


Hi, experts.

 

An error occurred in the "Perform ORA Post load activities" phase of the DB instance installation during a system copy.

 

In accordance with the wiki below, I changed initSID.ora and then restarted Oracle.

However, I still cannot solve the problem.

http://wiki.sdn.sap.com/wiki/display/Basis/Problems+Installing+or+System+Copy+Import+to+Oracle+11g
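For reference, the wiki's workaround boils down to removing RAC-style leftovers from the profile and making sure no second instance with the same SID is still running. A rough command sketch (parameter names are standard Oracle init parameters; this is an assumption about the usual cause of ORA-00304 after a system copy, not a verified fix for this system):

```shell
# Check the profile for RAC-style parameters that trigger ORA-00304 on a
# single-instance system (instance_number set, cluster_database = true):
grep -i -E "instance_number|cluster_database|thread" \
    $ORACLE_HOME/dbs/init${ORACLE_SID}.ora

# Make sure no other instance with the same SID is already up:
ps -ef | grep ora_smon_${ORACLE_SID}

# After commenting the offending parameters out, bounce the instance:
sqlplus / as sysdba <<EOF
shutdown immediate
startup
EOF
```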

 

Could someone please help me solve it?


■Error message in sapinstgui
ORA-00304: requested INSTANCE_NUMBER is busy Disconnected .
SOLUTION: For more information, see ora_sql_results.log and the Oracle documentation.


■ora_sql_results.log
Connected to an idle instance.

ORA-00304: requested INSTANCE_NUMBER is busy
Disconnected

SAPINST: End of output of SQL executing program /oracle/'SID'/112/bin/sqlplus.

SAPINST found errors.
SAPINST The current process environment may be found in sapinst_ora_environment.log.


■sapinst_ora_environment.log
2013-09-04, 18:31:16 SAPINST Current process environment:
G_BROKEN_FILENAMES=1
HISTSIZE=1000
HOME=/root
HOSTNAME='hostname'
INPUTRC=/etc/inputrc


brspace -f tbreorg -t "*" -a cleanup command


Hi All

 

We have around 124 GB stuck in leftover '#$' interim tables after an online reorg failed. Can you please tell me how safe the command brspace -f tbreorg -t "*" -a cleanup is on a running system, and what its impact would be on the EDI40 table?
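A cautious way to approach this (a sketch, not a verified procedure for your system): run the cleanup interactively first, so brspace lists the leftover interim objects and asks for confirmation before dropping anything.

```shell
# Interactive run: brspace shows the interim objects of the failed
# online reorg (names starting with '#$') and prompts before each drop.
brspace -u / -f tbreorg -t "*" -a cleanup

# Only confirm objects you recognize as reorg leftovers; the original
# tables themselves (EDI40 included) are not part of the cleanup.
# Avoid '-c force' until the object list has been reviewed.
```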

 

Thanks

Dinesh

brarchive is failing


Hi,

We have Oracle 11g on Red Hat Linux.

We are trying to run /sapmnt/<SID>/exe/brarchive -u / -k yes -d disk -c -sd, but it gives the error message below.

 

BR0002I BRARCHIVE 7.20 (10)
BR0006I Start of offline redolog processing: aemalgbk.svd 2013-09-06 05.10.44
BR0484I BRARCHIVE log file: /oracle/<SID>/saparch/aemalgbk.svd
BR0280I BRARCHIVE time stamp: 2013-09-06 05.12.44
BR0301E SQL error -1031 at location BrInitOraCreate-2, SQL statement:
'CONNECT / AT PROF_CONN IN SYSOPER MODE'
ORA-01031: insufficient privileges
BR0303E Determination of Oracle version failed

BR0007I End of offline redolog processing: aemalgbk.svd 2013-09-06 05.12.44
BR0280I BRARCHIVE time stamp: 2013-09-06 05.12.44
BR0005I BRARCHIVE terminated with errors

 

We have tried sap note

Note 776505 - BR*Tools fail with ORA-01017 / ORA-01031 on Linux

but the issue remains the same.
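ORA-01031 on 'CONNECT / ... IN SYSOPER MODE' usually points at OS authentication: the user running brarchive must be in the OSDBA/OSOPER group, and the oracle binary needs its setuid bits. A sketch of the usual checks (the group names dba/oper are common defaults and may differ on this host):

```shell
id                             # is the calling user in the dba/oper group?
ls -l $ORACLE_HOME/bin/oracle  # expected mode roughly -rwsr-s--x, owner oraSID:dba

# Verify that an OS-authenticated SYSOPER connection works at all:
sqlplus /nolog <<EOF
connect / as sysoper
select status from v\$instance;
EOF
```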

Please help me.

Regards

Ganesh Tiwari

SAP_SLD_DATA_COLLECT job hangs


Dear Experts,

We have scheduled the SAP_SLD_DATA_COLLECT job via RZ70 in our ECC system. Most of the time it runs successfully, but once every day it hangs.

I checked the SLD system as well, and everything is fine there. I have also checked the RFC destinations SLD_UC and SLD_NUC, and both work fine.

What might be the issue? Kindly suggest. Kindly find the dev_wx file below.

 

Environment:

SAP ECC6

Windows 2008 R2

Oracle 11.3

 

 

Warm Regards,

Sumit Jha

SAP Oracle Upgrade issue


Hello,

 

I'm going through an Oracle upgrade from 10.2.0.4 to 11.2.0.3. I have already completed the pre-upgrade tasks without problems; however, I'm getting a dump which says that I have a corrupted data block. Should I try to proceed through DBUA, or will it give me an error during the upgrade or post-upgrade tasks? I'd like to open an OSS message, but as my database is still on 10g I'd probably not get much support.
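Before deciding whether to run DBUA, it may help to pin down the corruption first; a command sketch (the datafile path is an example only):

```shell
# Populate v$database_block_corruption without writing a backup piece:
rman target / <<EOF
backup validate check logical database;
EOF

# Then see which file/block is affected and whether it belongs to a
# segment at all (corruption in free space is far less critical):
sqlplus / as sysdba <<EOF
select * from v\$database_block_corruption;
EOF

# dbv can double-check an individual datafile, e.g.:
# dbv file=/oracle/SID/sapdata1/sr3_1/sr3.data1 blocksize=8192
```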

 

Regards,

 

JAM

 

OS: AIX 7.1

Moving mirrorlog files


Hi,

 

I need to move the mirror log files (Oracle 11g) from one drive to another on Windows Server 2008 R2.

 

I have gone through a couple of posts, but they describe different ways of doing it.

 

I am not entirely sure how to proceed.
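One commonly used approach for online redo log mirrors is to add new members on the target drive and drop the old ones; a SQL*Plus sketch (group numbers and paths are examples only — take yours from v$logfile):

```shell
sqlplus / as sysdba <<EOF
-- see the current members and their groups:
select group#, member from v\$logfile order by group#;

-- add a replacement member on the new drive:
alter database add logfile member
  'E:\ORACLE\SID\MIRRLOGA\LOG_G11M2.DBF' to group 1;

-- a member can only be dropped while its group is INACTIVE;
-- switch logs until that is the case:
alter system switch logfile;
alter database drop logfile member
  'D:\ORACLE\SID\MIRRLOGA\LOG_G11M2.DBF';
EOF
# Afterwards delete the dropped file at OS level and repeat per group.
```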

 

Thanks.

 

Best Regards,

Anita

Oracle Dataguard and SAP licensing policy


Hi Gurus

We are planning to set up a disaster recovery site using Oracle 11g Data Guard.

Would it be possible for you to answer the following queries?

 

1) Is Oracle Data Guard part of the Oracle DVD set provided or downloaded from SAP?

2) Do we need to procure a license for Oracle Data Guard additionally from SAP?

3) Would it be possible to provide an SAP Note number on this?

 

Thanks and Regards

Upendra

Oracle offline backup and restore to tape


Dear all,

 

Earlier we were taking offline and online backups using TSM (the backup device type used was util_file).

Now we are setting up disaster recovery, and the target system's system copy installation has stopped in the "Backup Restore" phase.

Please tell me step by step how to trigger an Oracle offline backup to tape.
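For a one-off offline backup to a local tape instead of TSM, the profile's device type has to change; a sketch (device names are AIX examples, not taken from this system):

```shell
# In initSID.sap, roughly:
#   backup_type      = offline_force
#   backup_dev_type  = tape            # instead of util_file
#   tape_address     = /dev/rmt0.1     # no-rewind device
#   tape_address_rew = /dev/rmt0       # rewind device

# Then run the backup and save the archive logs:
brbackup -u / -t offline_force -d tape -m all
brarchive -u / -d tape -s
```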

Regards,

gayathri


Locking in R3


Hi All,

 

 

I have a doubt: Oracle by itself provides row-level locking, and since SAP R/3 runs on top of it, its update queries should also use row-level locking.

My understanding is that row-level locks do not stay on the database for long after a record has been updated.

Then how can there be 8,000 lock entries in a system at a given point in time? These locks sometimes stay for more than 9 to 10 hours.

Kindly provide some links to understand this concept. Are these lock entries not following row-level locking?
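The long-lived entries (e.g. in SM12) are SAP enqueue locks, managed by the enqueue server at the application level and held until the SAP logical unit of work ends; Oracle row locks are only held until the owning work process commits. A sketch for comparing the two (the v$lock query is standard Oracle):

```shell
# Database-level transaction locks - typically short-lived:
sqlplus / as sysdba <<EOF
select sid, type, lmode, ctime
from v\$lock
where type in ('TX', 'TM')
order by ctime desc;
EOF

# Application-level locks: transaction SM12 in SAP shows the enqueue
# entries, including old ones left behind by hanging update requests
# (check SM13 for stuck updates in that case).
```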

 

Thanks,

Swadesh

ReturnCode -1403


Dear All,

FI users are facing delays when trying to save data in transaction F-29. Please check the attached trace and guide me on rectifying this error, ReturnCode -1403.

The system is running on Windows Server 2008 with Oracle 10.2.0.5.

 

 

Regards,

SAP workload monitor not showing data for a particular date in the Total option


Hi,

In transaction ST03N (Workload Monitor), data for a particular date is not shown under the Total option, but it is shown for all the individual instances.

 

Please support

 

 

 

 

Regards

Ganesh Tiwari

Oracle Update BSP 11.2.0.3.0 to 11.2.0.3.7


Dear Experts,

 

Due to a GoLive Check recommendation, we have been tasked with updating our Oracle patch level from 11.2.0.3.0 to 11.2.0.3.7. Sadly, the installation of this patch has not gone as expected.

 

I installed the patch as per this link, following all the recommendations: making sure everything is stopped when needed, using the fuser command for stale sessions, updating both OPatch and MOPatch to the latest available versions, and so on.

 

However, during the installation, only 30 of the 61 patches that were supposed to be installed succeeded. The remaining 31 patches were not installed because of missing prerequisites or conflicts, except for 3 patches of the BSP that actually failed during installation (9584028, 9458152, 14488478).

 

These 3 patches all failed with the same error: the installer seems to be trying to copy a file from one folder to another and gets a "file doesn't exist" error.

Note: a lot of people on the Internet/forums hit issues during the installation because of authorization problems; that is NOT the case here.

 

All with the similar error: Copy Action: Source file /oracle/S11/112_64/.patch_storage/9584028_Jun_22_2012_11_39_40/files/sap/ora_upgrade/post_upgrade/post_upgrade_checks.sql" does not exist. 'oracle.rdbms, 11.2.0.3.0': Cannot copy file from 'post_upgrade_checks.sql

 

The odd thing is that the patch was compiled on May 15, 2013, but it is somehow referencing a folder from Jun 22, 2012.

 

I can no longer restore the backup that was taken before the update: several days have passed since the update, and restoring would make the consultants lose 7 days of work. So, I wonder:

 

1. How can I fix this issue? Has anyone encountered the same problem with these patches?

2. If it cannot be fixed, are these patches critical? SAP said that these patches do NOT modify Oracle binaries, so I don't think they are that critical... but are they a must?
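Two things that may help narrow this down (a sketch; the staging path is an example, not taken from this system): check which of the three patch numbers actually reached the inventory, and retry one failed patch on its own.

```shell
# Which of the failed patches are (partially) registered?
$ORACLE_HOME/OPatch/opatch lsinventory | grep -E "9584028|9458152|14488478"

# A single patch can usually be retried from its unpacked directory:
cd /oracle/stage/SBP_11203_7/9584028
$ORACLE_HOME/OPatch/opatch apply
```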

 

Thank you for your time,

Kind regards,

PIU

Lost client 000 DDIC password for installation


Hello,

I'm currently doing an ERP installation with R3load.

 

I exported the R3 data from the source system successfully; however, I noticed that the password I know for the client 000 DDIC user is incorrect.

 

I hear there may be a way to proceed with the installation even without knowing the client 000 DDIC password, but I don't know the details.

If someone knows it, please help me.

 

Regards,

Naomi Yamane

Redo Log backup


Dear All;

 

I take an offline backup plus a redo log backup every weekend.

 

I have already used the offline backup for many things, such as quality-system refreshes, but I have never used the redo log backup.

 

Can anyone tell me what the redo logs are used for?
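In short: the offline backup is a snapshot, and the saved redo (archive) logs are what let you roll that snapshot forward to any later point in time. A BR*Tools sketch of such a recovery (a sketch only — in practice both tools run interactively and prompt for details):

```shell
# Restore the data files from the last successful backup:
brrestore -u / -b last -m full

# Roll forward by applying the saved redo logs; '-t dbpit' is a
# database point-in-time recovery, and brrecover prompts for the
# target time and the needed archive logs:
brrecover -u / -t dbpit

# Without redo log backups you could only ever return to the exact
# moment the offline backup was taken; everything after it would be lost.
```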

 

Best Regards

~Amal Aloun

SAP Online Backup by ArcServer


Hello Team,

 

I'm setting up an online backup to tape via ARCserve, but I'm having a problem when BR*Tools opens the Oracle file system.


I am using ECC 6.0 with an Oracle database on AIX.


Does anyone have a step-by-step document for configuring an online SAP BR*Tools backup via ARCserve?

 

=========================================================================================

 

'/oracle/SRQ/sapdata2/sr3_5/sr3.data5'.
 
  09/11 17:42:06(18808906) -
  09/11 17:42:06(18808906) - - DSAOpenDataFile(): cannot open file '/oracle/SRQ/sapdata2/sr3_8/sr3.data8'.
 
  09/11 17:42:12(18808906) -
  09/11 17:42:12(18808906) - - DSAOpenDataFile(): cannot open file '/oracle/SRQ/sapdata3/sr3_3/sr3.data3'.
 

===============================================================================

 


The initSRQ.sap settings follow:

 

 

=========================================================================================

 

# @(#) $Id: //bas/720_REL/src/ccm/rsbr/initAIX.sap#11 $ SAP

########################################################################

#                                                                      #

# SAP BR*Tools sample profile.                                         #

# The parameter syntax is the same as for init.ora parameters.         #

# Enclose parameter values which consist of more than one symbol in    #

# double quotes.                                                       #

# After any symbol, parameter definition can be continued on the next  #

# line.                                                                #

# A parameter value list should be enclosed in parentheses, the list   #

# items should be delimited by commas.                                 #

# There can be any number of white spaces (blanks, tabs and new lines) #

# between symbols in parameter definition.                             #

# Comment lines must start with a hash character.                      #

#                                                                      #

########################################################################

# backup mode [all | all_data | full | incr | sap_dir | ora_dir

# | all_dir | <tablespace_name> | <file_id> | <file_id1>-<file_id2>

# | <generic_path> | (<object_list>)]

# default: all

backup_mode = all

# restore mode [all | all_data | full | incr | incr_only | incr_full

# | incr_all | <tablespace_name> | <file_id> | <file_id1>-<file_id2>

# | <generic_path> | (<object_list>) | partial | non_db

# redirection with '=' is not supported here - use option '-m' instead

# default: all

restore_mode = all

# backup type [offline | offline_force | offline_standby | offline_split

# | offline_mirror | offline_stop | online | online_cons | online_split

# | online_mirror | online_standby | offstby_split | offstby_mirror

# default: offline

backup_type = online

# backup device type

# [tape | tape_auto | tape_box | pipe | pipe_auto | pipe_box | disk

# | disk_copy | disk_standby | stage | stage_copy | stage_standby

# | util_file | util_file_online | util_vol | util_vol_online

# | rman_util | rman_disk | rman_stage | rman_prep]

# default: tape

backup_dev_type = util_file_online

# backup root directory [<path_name> | (<path_name_list>)]

# default: $SAPDATA_HOME/sapbackup

backup_root_dir = /oracle/SRQ/sapbackup

# stage root directory [<path_name> | (<path_name_list>)]

# default: value of the backup_root_dir parameter

stage_root_dir = /oracle/SRQ/sapbackup

# compression flag [no | yes | hardware | only | brtools]

# default: no

#compress = no

# compress command

# first $-character is replaced by the source file name

# second $-character is replaced by the target file name

# <target_file_name> = <source_file_name>.Z

# for compress command the -c option must be set

# recommended setting for brbackup -k only run:

# "compress -b 12 -c $ > $"

# no default

compress_cmd = "compress -c $ > $"

# uncompress command

# first $-character is replaced by the source file name

# second $-character is replaced by the target file name

# <source_file_name> = <target_file_name>.Z

# for uncompress command the -c option must be set

# no default

uncompress_cmd = "uncompress -c $ > $"

# directory for compression [<path_name> | (<path_name_list>)]

# default: value of the backup_root_dir parameter

compress_dir = /oracle/SRQ/sapbackup

# brarchive function [save | second_copy | double_save | save_delete

# | second_copy_delete | double_save_delete | copy_save

# | copy_delete_save | delete_saved | delete_copied]

# default: save

archive_function = save_delete

# directory for archive log copies to disk

# default: first value of the backup_root_dir parameter

archive_copy_dir = /oracle/SRQ/sapbackup

# directory for archive log copies to stage

# default: first value of the stage_root_dir parameter

archive_stage_dir = /oracle/SRQ/sapbackup

# delete archive logs from duplex destination [only | no | yes | check]

# default: only

# archive_dupl_del = only

# new sapdata home directory for disk_copy | disk_standby

# no default

# new_db_home = /oracle/C11

# stage sapdata home directory for stage_copy | stage_standby

# default: value of the new_db_home parameter

# stage_db_home = /oracle/C11

# original sapdata home directory for split mirror disk backup

# no default

# orig_db_home = /oracle/C11

# remote host name

# no default

# remote_host = <host_name>

# remote user name

# default: current operating system user

# remote_user = <user_name>

# tape copy command [cpio | cpio_gnu | dd | dd_gnu | rman | rman_gnu

# | rman_dd | rman_dd_gnu | brtools | rman_brt]

# default: cpio

tape_copy_cmd = cpio

# disk copy command [copy | copy_gnu | dd | dd_gnu | rman | rman_gnu

# | rman_set | rman_set_gnu | ocopy]

# ocopy - only on Windows

# default: copy

disk_copy_cmd = rman_set

# stage copy command [rcp | scp | ftp | wcp]

# wcp - only on Windows

# default: rcp

stage_copy_cmd = rcp

# pipe copy command [rsh | ssh]

# default: rsh

pipe_copy_cmd = rsh

# flags for cpio output command

# default: -ovB

cpio_flags = -ovB

# flags for cpio input command

# default: -iuvB

cpio_in_flags = -iuvB

# flags for cpio command for copy of directories to disk

# default: -pdcu

# use flags -pdu for gnu tools

cpio_disk_flags = -pdcu

# flags for dd output command

# default: "obs=16k"

# recommended setting:

# Unix:    "obs=nk bs=nk", example: "obs=64k bs=64k"

# Windows: "bs=nk",        example: "bs=64k"

dd_flags = "obs=64k bs=64k"

# flags for dd input command

# default: "ibs=16k"

# recommended setting:

# Unix:    "ibs=nk bs=nk", example: "ibs=64k bs=64k"

# Windows: "bs=nk",        example: "bs=64k"

dd_in_flags = "ibs=64k bs=64k"

# number of members in RMAN save sets [ 1 | 2 | 3 | 4 | tsp | all ]

# default: 1

saveset_members = 1

# additional parameters for RMAN

# following parameters are relevant only for rman_util, rman_disk or

# rman_stage: rman_channels, rman_filesperset, rman_maxsetsize,

# rman_pool, rman_copies, rman_proxy, rman_parms, rman_send

# rman_maxpiecesize can be used to split an incremental backup saveset

# into multiple pieces

# rman_channels defines the number of parallel sbt channel allocations

# rman_filesperset = 0 means:

# one file per save set - for non-incremental backups

# up to 64 files in one save set - for incremental backups

# the others have the same meaning as for native RMAN

# rman_channels = 1

# rman_filesperset = 0

# rman_maxopenfiles = 0

# rman_maxsetsize = 0      # n[K|M|G] in KB (default), in MB or in GB

# rman_maxpiecesize = 0    # n[K|M|G] in KB (default), in MB or in GB

# rman_sectionsize = 0     # n[K|M|G] in KB (default), in MB or in GB

# rman_rate = 0            # n[K|M|G] in KB (default), in MB or in GB

# rman_diskratio = 0

# rman_duration = 0        # <min> - for minimizing disk load

# rman_keep = 0            # <days> - retention time

# rman_pool = 0

# rman_copies = 0 | 1 | 2 | 3 | 4

# rman_proxy = no | yes | only

# rman_parms = "BLKSIZE=65536 ENV=(BACKUP_SERVER=HOSTNAME)"

# rman_send = "'<command>'"

# rman_send = ("channel sbt_1 '<command1>' parms='<parameters1>'",

#              "channel sbt_2 '<command2>' parms='<parameters2>'")

# rman_compress = no | yes

# rman_maxcorrupt = (<dbf_name>|<dbf_id>:<corr_cnt>, ...)

# rman_cross_check = none | archive | arch_force

# remote copy-out command (backup_dev_type = pipe)

# $-character is replaced by current device address

# no default

copy_out_cmd = "dd ibs=8k obs=64k of=$"

# remote copy-in command (backup_dev_type = pipe)

# $-character is replaced by current device address

# no default

copy_in_cmd = "dd ibs=64k obs=8k if=$"

# rewind command

# $-character is replaced by current device address

# no default

# operating system dependent, examples:

# HP-UX:   "mt -f $ rew"

# TRU64:   "mt -f $ rewind"

# AIX:     "tctl -f $ rewind"

# Solaris: "mt -f $ rewind"

# Windows: "mt -f $ rewind"

# Linux:   "mt -f $ rewind"

rewind = "tctl -f $ rewind"

# rewind and set offline command

# $-character is replaced by current device address

# default: value of the rewind parameter

# operating system dependent, examples:

# HP-UX:   "mt -f $ offl"

# TRU64:   "mt -f $ offline"

# AIX:     "tctl -f $ offline"

# Solaris: "mt -f $ offline"

# Windows: "mt -f $ offline"

# Linux:   "mt -f $ offline"

rewind_offline = "tctl -f $ offline"

# tape positioning command

# first $-character is replaced by current device address

# second $-character is replaced by number of files to be skipped

# no default

# operating system dependent, examples:

# HP-UX:   "mt -f $ fsf $"

# TRU64:   "mt -f $ fsf $"

# AIX:     "tctl -f $ fsf $"

# Solaris: "mt -f $ fsf $"

# Windows: "mt -f $ fsf $"

# Linux:   "mt -f $ fsf $"

tape_pos_cmd = "tctl -f $ fsf $"

# mount backup volume command in auto loader / juke box

# used if backup_dev_type = tape_box | pipe_box

# no default

# mount_cmd = "<mount_cmd> $ $ $ [$]"

# dismount backup volume command in auto loader / juke box

# used if backup_dev_type = tape_box | pipe_box

# no default

# dismount_cmd = "<dismount_cmd> $ $ [$]"

# split mirror disks command

# used if backup_type = offline_split | online_split | offline_mirror

# | online_mirror

# no default

# split_cmd = "<split_cmd> [$]"

# resynchronize mirror disks command

# used if backup_type = offline_split | online_split | offline_mirror

# | online_mirror

# no default

# resync_cmd = "<resync_cmd> [$]"

# additional options for SPLITINT interface program

# no default

# split_options = "<split_options>"

# resynchronize after backup flag [no | yes]

# default: no

# split_resync = no

# pre-split command

# no default

# pre_split_cmd = "<pre_split_cmd>"

# post-split command

# no default

# post_split_cmd = "<post_split_cmd>"

# pre-shut command

# no default

# pre_shut_cmd = "<pre_shut_cmd>"

# post-shut command

# no default

# post_shut_cmd = "<post_shut_cmd>"

# pre-archive command

# no default

# pre_arch_cmd = "<pre_arch_cmd> [$]"

# post-archive command

# no default

# post_arch_cmd = "<post_arch_cmd> [$]"

# pre-backup command

# no default

# pre_back_cmd = "<pre_back_cmd> [$]"

# post-backup command

# no default

# post_back_cmd = "<post_back_cmd> [$]"

# volume size in KB = K, MB = M or GB = G (backup device dependent)

# default: 1200M

# recommended values for tape devices without hardware compression:

# 60 m   4 mm  DAT DDS-1 tape:    1200M

# 90 m   4 mm  DAT DDS-1 tape:    1800M

# 120 m  4 mm  DAT DDS-2 tape:    3800M

# 125 m  4 mm  DAT DDS-3 tape:   11000M

# 112 m  8 mm  Video tape:        2000M

# 112 m  8 mm  high density:      4500M

# DLT 2000     10/20 GB:         10000M

# DLT 2000XT   15/30 GB:         15000M

# DLT 4000     20/40 GB:         20000M

# DLT 7000     35/70 GB:         35000M

# recommended values for tape devices with hardware compression:

# 60 m   4 mm  DAT DDS-1 tape:    1000M

# 90 m   4 mm  DAT DDS-1 tape:    1600M

# 120 m  4 mm  DAT DDS-2 tape:    3600M

# 125 m  4 mm  DAT DDS-3 tape:   10000M

# 112 m  8 mm  Video tape:        1800M

# 112 m  8 mm  high density:      4300M

# DLT 2000     10/20 GB:          9000M

# DLT 2000XT   15/30 GB:         14000M

# DLT 4000     20/40 GB:         18000M

# DLT 7000     35/70 GB:         30000M

tape_size = 100G

# volume size in KB = K, MB = M or GB = G used by brarchive

# default: value of the tape_size parameter

# tape_size_arch = 100G

# tape block size in KB for brtools as tape copy command on Windows

# default: 64

# tape_block_size = 64

# rewind and set offline for brtools as tape copy command on Windows

# yes | no

# default: yes

# tape_set_offline = yes

# level of parallel execution

# default: 0 - set to number of backup devices

exec_parallel = 0

# address of backup device without rewind

# [<dev_address> | (<dev_address_list>)]

# no default

# operating system dependent, examples:

# HP-UX:   /dev/rmt/0mn

# TRU64:   /dev/nrmt0h

# AIX:     /dev/rmt0.1

# Solaris: /dev/rmt/0mn

# Windows: /dev/nmt0

# Linux:   /dev/nst0

tape_address = /dev/rmt0.1

# address of backup device without rewind used by brarchive

# default: value of the tape_address parameter

# operating system dependent

# tape_address_arch = /dev/rmt0.1

# address of backup device with rewind

# [<dev_address> | (<dev_address_list>)]

# no default

# operating system dependent, examples:

# HP-UX:   /dev/rmt/0m

# TRU64:   /dev/rmt0h

# AIX:     /dev/rmt0

# Solaris: /dev/rmt/0m

# Windows: /dev/mt0

# Linux:   /dev/st0

tape_address_rew = /dev/rmt0

# address of backup device with rewind used by brarchive

# default: value of the tape_address_rew parameter

# operating system dependent

# tape_address_rew_arch = /dev/rmt0

# address of backup device with control for mount/dismount command

# [<dev_address> | (<dev_address_list>)]

# default: value of the tape_address_rew parameter

# operating system dependent

# tape_address_ctl = /dev/...

# address of backup device with control for mount/dismount command

# used by brarchive

# default: value of the tape_address_rew_arch parameter

# operating system dependent

# tape_address_ctl_arch = /dev/...

# volumes for brarchive

# [<volume_name> | (<volume_name_list>) | SCRATCH]

# no default

volume_archive = (SRQA01, SRQA02, SRQA03, SRQA04, SRQA05,

                  SRQA06, SRQA07, SRQA08, SRQA09, SRQA10,

                  SRQA11, SRQA12, SRQA13, SRQA14, SRQA15,

                  SRQA16, SRQA17, SRQA18, SRQA19, SRQA20,

                  SRQA21, SRQA22, SRQA23, SRQA24, SRQA25,

                  SRQA26, SRQA27, SRQA28, SRQA29, SRQA30)

# volumes for brbackup

# [<volume_name> | (<volume_name_list>) | SCRATCH]

# no default

volume_backup = (SRQB01, SRQB02, SRQB03, SRQB04, SRQB05,

                 SRQB06, SRQB07, SRQB08, SRQB09, SRQB10,

                 SRQB11, SRQB12, SRQB13, SRQB14, SRQB15,

                 SRQB16, SRQB17, SRQB18, SRQB19, SRQB20,

                 SRQB21, SRQB22, SRQB23, SRQB24, SRQB25,

                 SRQB26, SRQB27, SRQB28, SRQB29, SRQB30)

# expiration period in days for backup volumes

# default: 30

expir_period = 30

# recommended usages of backup volumes

# default: 100

tape_use_count = 100

# backup utility parameter file

# default: no parameter file

# null - no parameter file

# util_par_file = initSRQ.utl

# backup utility parameter file for volume backup

# default: no parameter file

# null - no parameter file

# util_vol_par_file = initSRQ.vol

# additional options for BACKINT interface program

# no default

# "" - no additional options

# util_options = "<backint_options>"

# additional options for BACKINT volume backup type

# no default

# "" - no additional options

# util_vol_options = "<backint_options>"

# path to directory BACKINT executable will be called from

# default: sap-exe directory

# null - call BACKINT without path

# util_path = <dir>|null

# path to directory BACKINT will be called from for volume backup

# default: sap-exe directory

# null - call BACKINT without path

# util_vol_path = <dir>|null

# disk volume unit for BACKINT volume backup type

# [disk_vol | sap_data | all_data | all_dbf]

# default: sap_data

# util_vol_unit = <unit>

# additional access to files saved by BACKINT volume backup type

# [none | copy | mount | both]

# default: none

# util_vol_access = <access>

# negative file/directory list for BACKINT volume backup type

# [<file_dir_name> | (<file_dir_list>) | no_check]

# default: none

# util_vol_nlist = <nlist>

# mount/dismount command parameter file

# default: no parameter file

# mount_par_file = initSRQ.mnt

# Oracle connection name to the primary database

# [primary_db = <conn_name> | LOCAL]

# no default

# primary_db = <conn_name>

# Oracle connection name to the standby database

# [standby_db = <conn_name> | LOCAL]

# no default

# standby_db = <conn_name>

# description of parallel instances for Oracle RAC

# parallel_instances = <inst_desc> | (<inst_desc_list>)

# <inst_desc_list>   - <inst_desc>[,<inst_desc>...]

# <inst_desc>        - <Oracle_sid>:<Oracle_home>@<conn_name>

# <Oracle_sid>       - Oracle system id for parallel instance

# <Oracle_home>      - Oracle home for parallel instance

# <conn_name>        - Oracle connection name to parallel instance

# Please include the local instance in the parameter definition!

# default: no parallel instances

# example for initRAC001.sap:

# parallel_instances = (RAC001:/oracle/RAC/920_64@RAC001,

# RAC002:/oracle/RAC/920_64@RAC002, RAC003:/oracle/RAC/920_64@RAC003)

# local Oracle RAC database homes [no | yes]

# default: no - shared database homes

# loc_ora_homes = yes

# handling of Oracle RAC database services [no | yes]

# default: no

# db_services = yes

# database owner of objects to be checked

# <owner> | (<owner_list>)

# default: all SAP owners

# check_owner = SAPSR3

# database objects to be excluded from checks

# all_part | non_sap | [<owner>.]<table> | [<owner>.]<index>

# | [<owner>.][<prefix>]*[<suffix>] | <tablespace> | (<object_list>)

# default: no exclusion, example:

# check_exclude = (SDBAH, SAPSR3.SDBAD)

# special database check conditions

# ("<type>:<cond>:<active>:<sever>:[<chkop>]:[<chkval>]:[<unit>]", ...)

# check_cond = (<cond_list>)

# database owner of SDBAH, SDBAD and XDB tables for cleanup

# <owner> | (<owner_list>)

# default: all SAP owners

# cleanup_owner = SAPSR3

# retention period in days for brarchive log files

# default: 30

# cleanup_brarchive_log = 30

# retention period in days for brbackup log files

# default: 30

# cleanup_brbackup_log = 30

# retention period in days for brconnect log files

# default: 30

# cleanup_brconnect_log = 30

# retention period in days for brrestore log files

# default: 30

# cleanup_brrestore_log = 30

# retention period in days for brrecover log files

# default: 30

# cleanup_brrecover_log = 30

# retention period in days for brspace log files

# default: 30

# cleanup_brspace_log = 30

# retention period in days for archive log files saved on disk

# default: 30

# cleanup_disk_archive = 30

# retention period in days for database files backed up on disk

# default: 30

# cleanup_disk_backup = 30

# retention period in days for brspace export dumps and scripts

# default: 30

# cleanup_exp_dump = 30

# retention period in days for Oracle trace and audit files

# default: 30

# cleanup_ora_trace = 30

# retention period in days for records in SDBAH and SDBAD tables

# default: 100

# cleanup_db_log = 100

# retention period in days for records in XDB tables

# default: 100

# cleanup_xdb_log = 100

# retention period in days for database check messages

# default: 100

# cleanup_check_msg = 100

# database owner of objects to adapt next extents

# <owner> | (<owner_list>)

# default: all SAP owners

# next_owner = SAPSR3

# database objects to adapt next extents

# all | all_ind | special | [<owner>.]<table> | [<owner>.]<index>

# | [<owner>.][<prefix>]*[<suffix>] | <tablespace> | (<object_list>)

# default: all objects of selected owners, example:

# next_table = (SDBAH, SAPSR3.SDBAD)

# database objects to be excluded from adapting next extents

# all_part | [<owner>.]<table> | [<owner>.]<index>

# | [<owner>.][<prefix>]*[<suffix>] | <tablespace> | (<object_list>)

# default: no exclusion, example:

# next_exclude = (SDBAH, SAPSR3.SDBAD)

# database objects to get special next extent size

# allsel:<size>[/<limit>] | [<owner>.]<table>:<size>[/<limit>]

# | [<owner>.]<index>:<size>[/<limit>]

# | [<owner>.][<prefix>]*[<suffix>]:<size>[/<limit>]

# | (<object_size_list>)

# default: according to table category, example:

# next_special = (SDBAH:100K, SAPSR3.SDBAD:1M/200)

# maximum next extent size

# default: 2 GB - 5 * <database_block_size>

# next_max_size = 1G

# maximum number of next extents

# default: 0 - unlimited

# next_limit_count = 300

# database owner of objects to update statistics

# <owner> | (<owner_list>)

# default: all SAP owners

# stats_owner = SAPSR3

# database objects to update statistics

# all | all_ind | all_part | missing | info_cubes | dbstatc_tab

# | dbstatc_mon | dbstatc_mona | [<owner>.]<table> | [<owner>.]<index>

# | [<owner>.][<prefix>]*[<suffix>] | <tablespace> | (<object_list>)

# | harmful | locked | system_stats | oradict_stats | oradict_tab

# default: all objects of selected owners, example:

# stats_table = (SDBAH, SAPSR3.SDBAD)

# database objects to be excluded from updating statistics

# all_part | info_cubes | [<owner>.]<table> | [<owner>.]<index>

# | [<owner>.][<prefix>]*[<suffix>] | <tablespace> | (<object_list>)

# default: no exclusion, example:

# stats_exclude = (SDBAH, SAPSR3.SDBAD)

# method for updating statistics for tables not in DBSTATC

# E | EH | EI | EX | C | CH | CI | CX | A | AH | AI | AX | E= | C= | =H

# | =I | =X | +H | +I

# default: according to internal rules

# stats_method = E

# sample size for updating statistics for tables not in DBSTATC

# P<percentage_of_rows> | R<thousands_of_rows>

# default: according to internal rules

# stats_sample_size = P10

# number of buckets for updating statistics with histograms

# default: 75

# stats_bucket_count = 75

# threshold for collecting statistics after checking

# <threshold> | (<threshold> [, all_part:<threshold>

# | info_cubes:<threshold> | [<owner>.]<table>:<threshold>

# | [<owner>.][<prefix>]*[<suffix>]:<threshold>

# | <tablespace>:<threshold> | <object_list>])

# default: 50%

# stats_change_threshold = 50

# number of parallel threads for updating statistics

# default: 1

# stats_parallel_degree = 1

# processing time limit in minutes for updating statistics

# default: 0 - no limit

# stats_limit_time = 0

# parameters for calling DBMS_STATS supplied package

# all:R|B|H|G[<buckets>|A|S|R|D[A|I|P|X|D]]:0|<degree>|A|D

# | all_part:R|B[<buckets>|A|S|R|D[A|I|P|X|D]]:0|<degree>|A|D

# | info_cubes:R|B[<buckets>|A|S|R|D[A|I|P|X|D]]:0|<degree>|A|D

# | [<owner>.]<table>:R|B[<buckets>|A|S|R|D[A|I|P|X|D]]:0|<degree>|A|D

# | [<owner>.][<prefix>]*[<suffix>]:R|B[<buckets>|A|S|R|D[A|I|P|X|D]]:0

# |<degree>|A|D | (<object_list>) | NO

# R|B - sampling method:

# 'R' - row sampling, 'B' - block sampling,

# 'H' - histograms by row sampling, 'G' - histograms by block sampling

# [<buckets>|A|S|R|D] - buckets count:

# <buckets> - histogram buckets count, 'A' - auto buckets count,

# 'S' - skew-only, 'R' - repeat, 'D' - default buckets count (75)

# [A|I|P|X|D] - columns with histograms:

# 'A' - all columns, 'I' - indexed columns, 'P' - partition columns,

# 'X' - indexed and partition columns, 'D' - default columns

# 0|<degree>|A|D - parallel degree:

# '0' - default table degree, <degree> - dbms_stats parallel degree,

# 'A' - dbms_stats auto degree, 'D' - default Oracle degree

# default: ALL:R:0

# stats_dbms_stats = ([ALL:R:1,][<owner>.]<table>:R:<degree>,...)

# definition of info cube tables

# default | rsnspace_tab | [<owner>.]<table>

# | [<owner>.][<prefix>]*[<suffix>] | (<object_list>) | null

# default: rsnspace_tab

# stats_info_cubes = (/BIC/D*, /BI0/D*, ...)

# special statistics settings

# (<table>:[<owner>]:<active>:[<method>]:[<sample>], ...)

# stats_special = (<special_list>)

# update cycle in days for dictionary statistics within standard runs

# default: 0 - no update

# stats_dict_cycle = 100

# method for updating Oracle dictionary statistics

# C - compute | E - estimate | A - auto sample size

# default: C

# stats_dict_method = C

# sample size for updating dictionary statistics (stats_dict_method = E)

# <percent> (1-100)

# default: auto sample size

# stats_dict_sample = 10

# parallel degree for updating dictionary statistics

# auto | default | null | <degree> (1-256)

# default: Oracle default

# stats_dict_degree = 4

# update cycle in days for system statistics within standard runs

# default: 0 - no update

# stats_system_cycle = 100

# interval for updating Oracle system statistics

# 0 - NOWORKLOAD, >0 - interval in minutes

# default: 0

# stats_system_interval = 0

# database objects to be excluded from validating structure

# null | all_part | info_cubes | [<owner>.]<table> | [<owner>.]<index>

# | [<owner>.][<prefix>]*[<suffix>] | <tablespace> | (<object_list>)

# default: value of the stats_exclude parameter, example:

# valid_exclude = (SDBAH, SAPSR3.SDBAD)

# recovery type [complete | dbpit | tspit | reset | restore | apply

# | disaster]

# default: complete

# recov_type = complete

# directory for brrecover file copies

# default: $SAPDATA_HOME/sapbackup

# recov_copy_dir = /oracle/SRQ/sapbackup

# time period in days for searching for backups

# 0 - all available backups, >0 - backups from n last days

# default: 30

# recov_interval = 30

# degree of parallelism for applying archive log files

# 0 - use Oracle default parallelism, 1 - serial, >1 - parallel

# default: Oracle default

# recov_degree = 0

# number of lines for scrolling in list menus

# 0 - no scrolling, >0 - scroll n lines

# default: 20

# scroll_lines = 20

# time period in days for displaying profiles and logs

# 0 - all available logs, >0 - logs from n last days

# default: 30

# show_period = 30

# directory for brspace file copies

# default: $SAPDATA_HOME/sapreorg

# space_copy_dir = /oracle/SRQ/sapreorg

# directory for table export dump files

# default: $SAPDATA_HOME/sapreorg

# exp_dump_dir = /oracle/SRQ/sapreorg

# database tables for reorganization

# [<owner>.]<table> | [<owner>.][<prefix>]*[<suffix>]

# | [<owner>.][<prefix>]%[<suffix>] | (<table_list>)

# no default

# reorg_table = (SDBAH, SAPSR3.SDBAD)

# table partitions for reorganization

# [[<owner>.]<table>.]<partition>

# | [[<owner>.]<table>.][<prefix>]%[<suffix>]

# | [[<owner>.]<table>.][<prefix>]*[<suffix>] | (<tabpart_list>)

# no default

# reorg_tabpart = (PART1, PARTTAB1.PART2, SAPSR3.PARTTAB2.PART3)

# database indexes for rebuild

# [<owner>.]<index> | [<owner>.][<prefix>]*[<suffix>]

# | [<owner>.][<prefix>]%[<suffix>] | (<index_list>)

# no default

# rebuild_index = (SDBAH~0, SAPSR3.SDBAD~0)

# index partitions for rebuild

# [[<owner>.]<index>.]<partition>

# | [[<owner>.]<index>.][<prefix>]%[<suffix>]

# | [[<owner>.]<index>.][<prefix>]*[<suffix>] | (<indpart_list>)

# no default

# rebuild_indpart = (PART1, PARTIND1.PART2, SAPSR3.PARTIND2.PART3)

# database tables for export

# [<owner>.]<table> | [<owner>.][<prefix>]*[<suffix>]

# | [<owner>.][<prefix>]%[<suffix>] | (<table_list>)

# no default

# exp_table = (SDBAH, SAPSR3.SDBAD)

# database tables for import

# <table> | (<table_list>)

# no default

# do not specify table owner in the list - use -o|-owner option for this

# imp_table = (SDBAH, SDBAD)

# Oracle system id of ASM instance

# default: +ASM

# asm_ora_sid = <asm_inst> | (<db_inst1>:<asm_inst1>,

# <db_inst2>:<asm_inst2>, <db_inst3>:<asm_inst3>, ...)

# asm_ora_sid = (RAC001:+ASM1, RAC002:+ASM2, RAC003:+ASM3, RAC004:+ASM4)

# asm_ora_sid = +ASM

# Oracle home of ASM instance

# no default

# asm_ora_home = <asm_home> | (<db_inst1>:<asm_home1>,

# <db_inst2>:<asm_home2>, <db_inst3>:<asm_home3>, ...)

# asm_ora_home = (RAC001:/oracle/GRID/11202, RAC002:/oracle/GRID/11202,

# RAC003:/oracle/GRID/11202, RAC004:/oracle/GRID/11202)

# asm_ora_home = /oracle/GRID/11202

# Oracle ASM root directory name

# default: ASM

# asm_root_dir = <asm_root>

# asm_root_dir = ASM
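For reference, a minimal usage sketch of this profile (the values here are illustrative examples, not recommendations): a parameter is activated by removing the leading # and assigning a value, e.g.:

```
# illustrative init<SID>.sap entries - example values only
stats_parallel_degree = 4
reorg_table = (SDBAH, SAPSR3.SDBAD)
```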

===========================================================================================

 
Regards,

Thiago


SAP Bundle Patch error


Hello Gurus,

 

I am not able to install the SAP Bundle Patch (SBP) on my Oracle database; MOPatch fails as shown below.

Kindly suggest how I can resolve this.

 

Getting pre-run patch inventory...
Getting pre-run patch inventory...done.

Analyzing installed patches...
Analyzing installed patches...failed.

Cannot verify lists of installed patches.
Refer to log file
  $ORACLE_HOME/cfgtoollogs/mopatch/mopatch-2013_09_12-13-11-26.log
for more information.
rubidium:oraxh1 126>
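As a first diagnostic step (a generic suggestion, not taken from the MOPatch documentation), the log file named in the output above can be scanned for error lines:

```shell
# scan the MOPatch log referenced in the output above for error indications
LOG="$ORACLE_HOME/cfgtoollogs/mopatch/mopatch-2013_09_12-13-11-26.log"
grep -inE 'error|fail|cannot' "$LOG" | head -20
```

The matching lines usually point at the patch whose inventory entry could not be verified.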

 

Thanks and Regards,

Prasad

SAP 4.7d Installation


Dear All,

I am installing SAP 4.7d and I am stuck at the data load step, where the error below occurs. Details of the machine:

 

Operating System           :              Linux 4.8 32-bit (higher versions are not supported by SAP).

SAP Application               :               4.7X200 Enterprise

Database                      :               Oracle 9i

 

 

I have set the environment variables as follows:

NLS_LANG

ORA_NLS33

LD_LIBRARY_PATH

LD_ASSUME_KERNEL="2.4.1"

 

Please see the sapinst.log excerpt below.

 

 

/sapmnt/CYP/exe/R3load: START OF LOG: 20130911160019

 

/sapmnt/CYP/exe/R3load: sccsid @(#) $Id: //bas/640_REL/src/R3ld/R3lo

ad/R3ldmain.c#6 $ SAP

/sapmnt/CYP/exe/R3load: version R6.40/V1.4

/sapmnt/CYP/exe/R3load -ctf I /sapdata/sapdata/cd8/DATA/SAPFUNCT.STR

/tmp/sapinst_instdir/R3E47X2/SYSTEM/ABAP/ORA/NUC/DB/DDLORA.TPL /tmp

/sapinst_instdir/R3E47X2/SYSTEM/ABAP/ORA/NUC/DB/SAPFUNCT.TSK ORA -l

/tmp/sapinst_instdir/R3E47X2/SYSTEM/ABAP/ORA/NUC/DB/SAPFUNCT.log

 

 

/sapmnt/CYP/exe/R3load: job completed

/sapmnt/CYP/exe/R3load: END OF LOG: 20130911160019

 

/sapmnt/CYP/exe/R3load: START OF LOG: 20130911161327

 

/sapmnt/CYP/exe/R3load: sccsid @(#) $Id: //bas/640_REL/src/R3ld/R3lo

ad/R3ldmain.c#6 $ SAP

/sapmnt/CYP/exe/R3load: version R6.40/V1.4

/sapmnt/CYP/exe/R3load -dbcodepage 1100 -i /tmp/sapinst_instdir/R3E4

7X2/SYSTEM/ABAP/ORA/NUC/DB/SAPFUNCT.cmd -l /tmp/sapinst_instdir/R3E4

7X2/SYSTEM/ABAP/ORA/NUC/DB/SAPFUNCT.log -stop_on_error

 

DbSl Trace: OCI-call 'OCISessionBegin' failed: rc = 1004

 

DbSl Trace: CONNECT failed with sql error '1004'

 

DbSl Trace: NLS_LANG not set appropriately (DB installation requires

AMERICAN_AMERICA.WE8ISO8859P1) ==> connection refused

 

(DB) ERROR: db_connect rc = 256

DbSl Trace: Already connected to CYP

 

(DB) ERROR: DbSlErrorMsg rc = 29

 

/sapmnt/CYP/exe/R3load: job finished with 1 error(s)

/sapmnt/CYP/exe/R3load: END OF LOG: 20130911161327

 

/sapmnt/CYP/exe/R3load: START OF LOG: 20130911172735

 

/sapmnt/CYP/exe/R3load: sccsid @(#) $Id: //bas/640_REL/src/R3ld/R3lo

ad/R3ldmain.c#6 $ SAP

/sapmnt/CYP/exe/R3load: version R6.40/V1.4

/sapmnt/CYP/exe/R3load -dbcodepage 1100 -i /tmp/sapinst_instdir/R3E4

7X2/SYSTEM/ABAP/ORA/NUC/DB/SAPFUNCT.cmd -l /tmp/sapinst_instdir/R3E4

7X2/SYSTEM/ABAP/ORA/NUC/DB/SAPFUNCT.log -stop_on_error
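Given the DbSl trace above, one hedged sketch of a fix is to export the NLS environment in the shell that starts sapinst before retrying. The NLS_LANG value is taken directly from the error message; the ORA_NLS33 path is the usual Oracle 9i location and may differ on your system:

```shell
# NLS environment required by R3load, per the error message above
export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
# typical Oracle 9i NLS data directory (verify on your installation)
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
echo "$NLS_LANG"
```

Note that sapinst must be restarted from the same shell so it inherits these values.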

 

 

 

 

regards,

 

Syed Tayab Shah

Changing Database Hostname for SAP systems


Hi,

 

As a test scenario for a few upgrades, we copied one of our development servers and restored it to a separate testing server.

Before restoring, I changed the IP address and hostname on the testing server, and then restored.

After restoring, I changed all the SAP profile parameters under /usr/sap/SID/SYS/profile as well as in tnsnames.ora and listener.ora, and then started the SAP services. The system started and connects fine through SAP Logon.

 

But after logging on to the SAP system, when I check System - Status - Database Data, it does not show the right hostname (i.e. not the testing server's hostname); it shows the hostname of the development server from which the system was copied.

 

Please let me know how to change the database hostname shown in the System - Status tab at SAP level.
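The hostname under System - Status - Database data is normally read from the SAPDBHOST profile parameter, so one thing to check (a sketch; <new_host> is a placeholder for the testing server's hostname) is the default profile:

```
# /usr/sap/<SID>/SYS/profile/DEFAULT.PFL
SAPDBHOST = <new_host>
j2ee/dbhost = <new_host>    # only if a Java stack is present
```

After changing the profile, restart the SAP instance so the new value is picked up.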

 

Please suggest, waiting for your reply.

 

Best Regards

Vamsi

 

 

 

Re-transmission of tlisrv requests between Application and Database


Dear Experts.

 

Our SAP production system has 4 application servers connecting to a database instance in a Linux HA cluster. There have been some network issues over the last few days, so we analysed the network. While inspecting it with Wireshark, we could see packets being re-transmitted between the database and the application servers, as shown in the attached Wireshark capture.

 

Server Vlan is 10.10.11.0/24 [vlan 11]

Access Vlan   is 10.10.10.0/24 [vlan 10]

 

Users connect through VLAN 10.

 

App servers - 21, 22, 23, 24

DB server - 10.10.10.13

 

 

Can anyone please explain the exact issue in our system?
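To correlate what Wireshark shows with kernel-side counters, a quick Linux-only sketch is to read the TCP RetransSegs counter on each application server and on the DB server (the counter is cumulative since boot, so compare readings over time):

```shell
# print the number of TCP segments retransmitted since boot (Linux)
awk '/^Tcp: [A-Za-z]/ { for (i = 1; i <= NF; i++) h[i] = $i }
     /^Tcp: [0-9]/    { for (i = 1; i <= NF; i++) if (h[i] == "RetransSegs") print $i }' \
    /proc/net/snmp
```

A counter growing much faster on one host or VLAN would point to where the loss is happening.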

DataGuard How-To
