We are using Oracle 11g Standard Edition, without any options (Diagnostics/Tuning packs). With the SQL below I can identify one session that has had high memory consumption (1.4 GB of PGA and UGA) over the past hours. Is there some way to get details about this consumption? Whether it is the use of temporary tables or something else?

    with x as (
      select s.osuser, s.username, s.status, se.sid, s.serial# serial, n.name,
             round(max(se.value)/1024/1024, 2) maxmemmb, max(se.value) as maxmem
      from   v$sesstat se, v$statname n, v$session s
      where  n.statistic# = se.statistic#
      and    n.name in ('session pga memory', 'session pga memory max',
                        'session uga memory', 'session uga memory max')
      and    s.sid = se.sid
      group by s.osuser, s.username, s.status, se.sid, s.serial#, n.name
      order by maxmem desc
    )
    select * from x where rownum <= 10;

After some research I discovered the view V$PROCESS_MEMORY_DETAIL and how to use it. By default it is always empty and needs some commands to 'enable' it (oradebug or alter session). References I found: this quote from an Oracle article says everything: "How to use V$PROCESS_MEMORY and V$PROCESS_MEMORY_DETAIL to identify where the memory is growing. In Oracle 10.2 and higher there are 2 new views that can be used to find where the memory continues to grow. These views can be used instead of a heap dump. V$PROCESS_MEMORY displays dynamic PGA memory usage by named component categories for each Oracle process."
I think maybe you need a tutorial on how AIX manages memory. Looking at high usage or low free memory is actually a somewhat useless exercise in the vast majority of cases. AIX will use all of the memory available to it, keeping file pages in cache until you hit a certain threshold (minfree), at which point it invokes the lrud (least-recently-used) daemon to recover pages. So free pages are designed to get pretty low, and no conclusions about the health of the system can be drawn just because this value is low. A more useful thing to monitor is paging. If you're using any paging space, you should figure out why; this could be an indication that lrud isn't getting the job done. Here's my script for that, with the threshold set at 20%:

    #!/bin/ksh
    # To report on page space usage.
    PctHigh=20
    ThisHost=$(hostname)
    PgStat=/tmp/pgStat.out
    lsps -s | tail +2 > $PgStat
    while read Total Pct
    do
      pgMb=$(echo $Total | cut -d 'MB' -f1)
      pgPct=$(echo $Pct | cut -d% -f1)
      ((MbUsed = pgMb * pgPct / 100))
      ((MbFree = pgMb - MbUsed))
      if ((pgPct >= PctHigh)); then
        echo "$ThisHost: page space ${pgPct}% used (${MbUsed} MB of ${pgMb} MB)"
      fi
    done < $PgStat

Hi Mahijeeth, you wrote: "so I am looking for a script that will send alerts whenever MEMORY usage exceeds 90%." Stop looking for that.
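The arithmetic inside that ksh loop — turning the lsps totals into MB used/free and comparing against the threshold — can be sketched as a small standalone function. This is a minimal illustration, not part of the original script; the 20% default and the sample values are assumptions:

```python
def check_paging(total_mb: int, pct_used: int, pct_high: int = 20):
    """Return (mb_used, mb_free, alert) for one paging-space reading.

    Mirrors the ksh (( )) integer arithmetic: total and percent come
    from `lsps -s` output; alert fires when usage crosses the threshold.
    """
    mb_used = total_mb * pct_used // 100   # integer division, like ksh
    mb_free = total_mb - mb_used
    alert = pct_used >= pct_high           # page space in use beyond threshold
    return mb_used, mb_free, alert

# Example: 4096 MB of page space, 25% used -> over the 20% threshold
print(check_paging(4096, 25))  # (1024, 3072, True)
```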
The max limit on memory depends on how much physical memory your server has, whether it is 32-bit or 64-bit, and what OS you use. For example, with 32-bit Windows the limit might be 2GB of RAM, depending on which version of Windows Server you are using and whether or not you have a certain flag set that would allow you to use up to 3GB of RAM. If you are using a newer 64-bit OS, then there is no practical limit to the size of the SGA; it would only depend on the amount of physical memory. So you should provide the following: 32-bit/64-bit, operating system and version, hardware type including chip set. I tried the following:

    SQL> create pfile from spfile;
    File created.
How do I find the memory size above which additional memory doesn't help database performance? Example: suppose we set up Oracle Database Standard Edition Two on a Windows server and initially allocate around 10GB, since we are not sure how much memory would be optimal for the database. How can we tell whether that 10GB was an over-allocation or an under-allocation? Memory under-allocation might affect database performance, but does over-allocating memory to an Oracle database also affect performance? Thanks in advance.
You can install and configure Enterprise Manager (emca) to get a web interface. It's pretty good at handling most of what you need. There is an advisor that can help you size the instance, and it will actually say something like 'Your SGA is 10GB. If you increase the SGA to 16GB...'. This is similar to the SQL profiler and analyzer, which suggests how restructuring a query or building an index can reduce the query run time by X%. What makes this a good tool is that it tells you what it thinks based on collected statistics, so there is no guessing. Not that there is anything wrong with what Adrian suggested; it's a very different animal.
Oversizing Oracle makes it an 'inefficient' use of resources, but it won't bog down your database in a single instance. When you have a clustered database, that's a different story; oversizing the SGA in that case can be bad, in certain situations. Generally, you want to size Oracle just over what it needs, getting maximum memory usage (no wasted RAM) while keeping efficiency ratios as close to 100% as possible. Sometimes the answer is not to increase the SGA overall, but to tune the buffer and sort caches, or to increase the PGA. Oracle Corporation has about 5 different courses on tuning. Yeah, it's that complicated. ;)

SPR1 wrote:
Sal8273 wrote: If you have too much, there is no additional benefit; Oracle will use what it needs, not what it can get its hands on.
That is what I wanted to know: whether over-allocation of RAM affects DB performance. Here is the reason why I'm asking the above question.
Our Windows guest virtual machines are set up on VMware, and the experts say that over-allocating vCPUs or RAM to any virtual machine might affect its performance. Since this DB is running on a VMware host, there are other things that can cause performance issues too. Be very careful about how many vCPUs are designated and about the total RAM for the VM. For absolute best performance, turn off hyper-threading on the host box, as a hyper-threaded CPU is really only about 25% of a 'real' core, and try not to allocate memory (and CPUs) in a way that forces access across NUMA boundaries. I forget the actual statistics, but a memory access across a NUMA boundary can take 50% longer than a 'local' memory access. The reasons behind this are simple: modern server motherboards have two or more processor sockets, and each socket has memory that is directly attached to it.
This memory is 'local' to that CPU and all the cores it contains, and access to it is faster than to any other memory in the machine. All CPUs have access to all the memory in the machine, but when the memory is not local, the CPU must request it in a different manner, and that takes more time. By planning in advance the layout of memory in your host and which CPUs will be addressing it, so that you aren't crossing NUMA boundaries (accessing memory attached to a different CPU), you get the fastest speeds possible. VMware attempts to minimize this for us, but isn't always successful. So while Oracle might not be 'over-provisioned' by adding too much memory, it is possible to slow your memory access by 50% by having the SGA or the host itself cross NUMA boundaries. This topic is covered in some detail, by folks far more expert in VMware than myself, in this presentation from VMworld: VMworld 2017 VIRT1309BU - Monster VMs (Database Virtualization) with VMware.
You can find that presentation on YouTube.
The following script returns the CPU usage for active sessions; the result shows the CPU usage in seconds. What I need is the same report with CPU usage as a percentage. What is the best way to do this?
    -- Show CPU Usage for Active Sessions
    SET PAUSE ON
    SET PAUSE 'Press Return to Continue'
    SET PAGESIZE 60
    SET LINESIZE 300
    COLUMN username FORMAT A30
    COLUMN sid FORMAT 999,999,999
    COLUMN serial# FORMAT 999,999,999
    COLUMN "cpu usage (seconds)" FORMAT 999,999,999.0000
    SELECT s.username,
           t.sid,
           s.serial#,
           SUM(t.value/100) AS "cpu usage (seconds)"
    FROM   v$session s, v$sesstat t, v$statname n
    WHERE  t.statistic# = n.statistic#
    AND    n.name LIKE '%CPU used by this session%'
    AND    t.sid = s.sid
    AND    s.status = 'ACTIVE'
    AND    s.username IS NOT NULL
    GROUP BY s.username, t.sid, s.serial#
    /

Long story short: you won't be able to do it with a single query; you will need to write a PL/SQL block to gather data at two points in time. Oracle keeps 'accumulated time' statistics, which means the engine maintains a continuously growing counter, so you have to define a start time and an end time for the analysis. You can query 'DB CPU' from V$SYS_TIME_MODEL:

    select value into t_dbcpu_i
    from   sys.v$sys_time_model
    where  stat_name = 'DB CPU';   -- start time

    select value into t_dbcpu_f
    from   sys.v$sys_time_model
    where  stat_name = 'DB CPU';   -- end time

CPU statistics are affected by whether you have just 1 CPU or 8 CPUs, so you also have to determine how many CPUs your engine is using. You can query 'cpu_count' from V$PARAMETER to obtain this value:

    select value into t_cpus
    from   sys.v$parameter
    where  name = 'cpu_count';

The maximum total time is the elapsed seconds times the number of CPUs: with a 60-second window and just 1 CPU, the maximum total time is 60; with 2 CPUs it is 120; with 3 CPUs, 180; etc. So you take the start time and end time of the analyzed period using sysdate:

    t_start := sysdate;
    t_end   := sysdate;

And now you compute the following ('DB CPU' is reported in microseconds):

    seconds_elapsed := (t_end - t_start) * 24 * 60 * 60;
    total_time      := seconds_elapsed * t_cpus;
    used_cpu        := t_dbcpu_f - t_dbcpu_i;
    secs_cpu        := used_cpu / 1000000;
    avg_cpu         := (secs_cpu / total_time) * 100;

And that's it: 'avg_cpu' is the value you are looking for.
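The arithmetic above can be checked with a small standalone sketch (the sampled values below are made up; 'DB CPU' is assumed to be cumulative microseconds, as in V$SYS_TIME_MODEL):

```python
def cpu_usage_percent(dbcpu_start_us: int, dbcpu_end_us: int,
                      seconds_elapsed: float, cpu_count: int) -> float:
    """Average CPU utilisation (%) over a sampling window.

    dbcpu_* are two cumulative 'DB CPU' readings in microseconds;
    seconds_elapsed is the wall-clock window between the readings;
    cpu_count scales the total CPU time available in that window.
    """
    total_time = seconds_elapsed * cpu_count       # max CPU-seconds available
    used_cpu_us = dbcpu_end_us - dbcpu_start_us    # CPU time consumed
    secs_cpu = used_cpu_us / 1_000_000             # microseconds -> seconds
    return secs_cpu / total_time * 100

# 60-second window, 2 CPUs (so 120 CPU-seconds available),
# 30 CPU-seconds consumed -> 25% average utilisation
print(cpu_usage_percent(0, 30_000_000, 60, 2))  # 25.0
```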