Channel: Intel® Software - Intel® VTune™ Profiler (Intel® VTune™ Amplifier)
Viewing all 1574 articles

Profiler Drivers missing


Brand new to VTune, on a CentOS Linux machine. "uname -r" gives "3.10.0-327.el7.x86_64".

When I installed vtune, it said

The install program cannot detect the kernel source directory for OS kernel
version 3.10.0-327.el7.x86_64. If kernel sources are already installed to custom
directory, set up this parameter in Advanced Options -> Driver Build Options
dialog.
To install kernel headers, execute one of the following commands specific to
your operating system:
- CentOS / Red Hat Enterprise Linux
   - On a system with the default kernel, install the kernel-devel package:
     sudo yum install kernel-devel-3.10.0-327.el7.x86_64
   - On a system with the PAE kernel, install the kernel-PAE package:
     sudo yum install kernel-PAE-devel-3.10.0-327.el7.x86_64
 

When I try to run that final "yum" line, I get:

No package kernel-PAE-devel-3.10.0-327.el7.x86_64 available.

When I run vtune-self-checker.sh, or try to set up a Microarchitecture Exploration in the VTune GUI, I get: "vtune: Error: This analysis type requires either an access to system-wide monitoring in the Linux perf subsystem or installation of the VTune Profiler drivers (see the "Sampling Drivers" help topic for further details)."

How do I install the VTune Profiler drivers?
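For reference, a minimal sketch of two common workarounds, assuming a default install location under /opt/intel (paths vary by version):

```shell
# Option 1: skip the drivers entirely and let VTune use the Linux perf
# subsystem; lowering perf_event_paranoid permits system-wide monitoring
# (not persistent across reboots)
sudo sysctl -w kernel.perf_event_paranoid=0

# Option 2: after installing kernel-devel for the running kernel,
# build and load the sampling drivers from the sepdk directory
cd /opt/intel/vtune_profiler/sepdk/src
sudo ./build-driver
sudo ./insmod-sep
```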


Intel VTune Profiler Comparison with Other Profilers in the Market


Dear Friends,

I have never used the VTune profiler for instrumenting C++.

Can someone tell me why Intel VTune is better than other profilers like Valgrind, perf, and gprof? In terms of overhead and the time profiling takes, how does Intel compare to the others?

I want to focus on measuring application cache misses and cache performance.

Also, I don't want to install the Intel VTune GUI on the development server, so can I load the output VTune creates via the command line into the VTune GUI on my local machine?
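For what it's worth, a sketch of the remote-collect / local-view flow (application name and paths are placeholders):

```shell
# On the development server: collect from the command line only,
# no GUI required there
vtune -collect hotspots -result-dir ./r001hs -- ./my_app

# On your local machine: copy the whole result directory over and
# open it in the locally installed GUI
scp -r user@devserver:path/to/r001hs .
vtune-gui ./r001hs
```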

 

Thanks

Himanshu

Export data from timeline


I need the time-series data (memory bandwidth, CPU analysis, etc.) exported into a readable format such as a .csv file.

How can I do that?
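A sketch of the command-line route, assuming a hotspots result directory named r001hs (which reports support a true time-series breakdown may vary by analysis type):

```shell
# Write report output as comma-separated values readable by a spreadsheet
vtune -report hotspots -result-dir ./r001hs \
      -format csv -csv-delimiter comma -report-output hotspots.csv
```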

Intel VTune Profiler Installation


Hi,

 

I would like to install the Intel VTune Profiler on my CentOS 7 machine. AFAIK, the latest version does not work on CentOS 6, but CentOS 7 is fine, isn't it?

Please help me with extracting the installation package to a writable directory with the following command:

tar -xzf vtune_profiler_<version>.tar.gz

I am struggling with the version part. Which version should I put? I mean, what should the complete command be?
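As an example, if the downloaded file were the hypothetical vtune_profiler_2020.1.0.607630.tar.gz, the commands would look like this (substitute the exact file name that ls shows you):

```shell
tar -xzf vtune_profiler_2020.1.0.607630.tar.gz
cd vtune_profiler_2020.1.0.607630
./install.sh
```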

 

Thanks in advance :-)

 

BR
Bobby ! 

My scenario and Intel VTune Profiler.


I am working on Ceph (https://en.wikipedia.org/wiki/Ceph_(software)). It is an open-source software storage platform.

- Through git I have cloned Ceph in my home folder.
- Built its dependencies
- And compiled Ceph in debug mode.

To work as a developer on Ceph, a utility script, vstart.sh, lets you deploy a fake local cluster for development purposes.

Once the cluster is deployed, you can write READ and WRITE tests (client test programs) against it. These tests are compiled using GCC in the build folder, producing executables. Some of my test codes are in C and some in C++.

Here I would like to bring Intel VTune Profiler into my workflow. I would like to do profiling of Ceph through my READ and WRITE test codes: profiling of function calls, loops, etc.

And I am using a single virtual machine (Linux CentOS 7). Ceph is written mostly in C++.

My questions:

- Does Intel VTune Profiler fit my scenario?

- If yes, given my scenario, where exactly should Intel VTune Profiler be installed?

- The executables of the READ and WRITE test codes are in the build folder, i.e. /home/user/ceph/build. How can I launch the VTune profiler in this case?

- Does Intel VTune Profiler support C executables?
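A sketch of how a launch might look, assuming a hypothetical test executable somewhere under the build tree:

```shell
cd /home/user/ceph/build
# my_read_test is a placeholder for one of your compiled test binaries
vtune -collect hotspots -result-dir ./r001hs -- ./bin/my_read_test
```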

 

Looking forward to your help.

VTune microarchitecture analysis metrics vs. varying CPU frequency


Hi,
I ran a code at different frequencies and collected VTune data (on an 8280 processor, RHEL 7) using microarchitecture analysis. I understand that VTune (v2020) can be used to identify the portions of code which underutilize the given hardware resources on a processor. I did this experiment to see how the application responds to variation of a particular hardware component, or which hardware component limits the scaling of this application (e.g. memory frequency, CPU frequency, etc.).

So, I gathered the data at various frequencies (acpi-cpufreq) and followed the metrics breakdown trail of the numbers shown in red in the VTune GUI:
  1: Back End Bound --> 2: (Memory Bound, Core Bound) --> 3: DRAM Bound --> 4: (Memory Bandwidth, Memory Latency) --> 5: Local DRAM.

I noticed that:
a) Back-End Bound = Memory Bound + Core Bound, e.g. (62% of clockticks = 42% + 20%)
b) Memory Bound ~= L1 Bound + L2 Bound + L3 Bound + DRAM Bound + Store Bound (42 ~= 8% + 3% + 2% + 20% + 6%)
c) DRAM Bound < Memory Bandwidth + Memory Latency (20 < 28 + 10)
d) Memory Latency << Local DRAM + Remote DRAM + Remote Cache (10 << 97 + 2 + 1)

Q1: What could be the reason behind the subcategory totals exceeding the category value for c and d?
      For c and d I was expecting something like DRAM Bound = Memory Bandwidth + Memory Latency.

Q2: On increasing the CPU frequency i got following from vtune for DRAM Memory Bandwidth
    1GHz    - 28   % of Clockticks
    1.4GHz - 37 %
    1.8GHz -  42 %
    2GHz    -  42.5 %
    2.6GHz -  42.8 %
    2.7GHz -  42.9 %
    2.7+boost enabled - 41.7 %
- The number of CPU stalls (for DRAM) is not increasing when the frequency exceeds 1.8 GHz. Now I am looking for the reason behind this behaviour.
I expected that with higher frequency, stalls would grow, as more CPU cycles / pipeline slots would be wasted due to data unavailability.
I am focusing on the metrics highlighted in red. As cache-bound clock cycles were almost constant (0.2-0.4% increase in each of L1, L2, L3, Store) for all the frequencies mentioned above, could I say that a larger cache will not help here? Contrary to what is mentioned here.

 

Q3: I noted that on varying the frequencies, the Vector Capacity Usage (FPU) stays constant at around 70%, which from the explanation here means that 70% of my floating point computations executed on VPU units (the rest were scalar).
Also, here I can see that there are different types of execution units which can process 256-bit data. Is it possible to see a breakdown of the floating point operations, like how many used 256-bit FP MUL, how many used 256-bit FP ADD, etc.?

 

Q4: Are 256-bit FP ADD / 256-bit FP MUL and FMA different? If yes, then on which port does the front end dispatch the uOps for FMA? I can't see an FMA unit in the block diagram.

Please let me know if more information is required from my end or if any of the questions mentioned above are vague or unclear.

2 problems with Threading analysis


Hello!

I am trying to profile a C++ application with OpenMP using Intel VTune Profiler and I have two problems with Threading analysis.
1) If I run the application for a short time (there are different modes and options in this application, so I can vary the execution time), I reach the 1000 MB collected-data limit in a few minutes.
2) Even if I run the application for a short time (so the data limit isn't reached), after data collection the finalization of the results freezes (see the "freeze" picture).

Please tell me, what should I do to solve these problems?
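For reference, a sketch of two knobs that may help, assuming the command-line driver is on PATH (my_app.exe is a placeholder):

```shell
# Raise the collection cap (in MB; 0 disables it entirely) and defer
# finalization so the collection step itself cannot hang on it
vtune -collect threading -data-limit=2000 -finalization-mode=deferred -result-dir r001tr -- my_app.exe

# Finalize afterwards, as a separate step
vtune -finalize -result-dir r001tr
```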

Many thanks! :)

P.S. Hotspots analysis, for example, runs relatively correctly. I have Windows 10 and NetBeans with the MinGW compiler.

Attachment: freeze.png (17.94 KB)

Running finalize on different host from profiling


Hi.  I need to run some code to be profiled on a special set of machines, but because they are a limited resource, I would like to do the finalization on a different machine.  All of the machines have essentially the same set up (same OS, same file systems mounted, same libraries).  However, when I try to finalize on a different host vtune detects that the host is different and doesn't automatically find the libraries and debug information, even though all of the information is in exactly the same place.  The application is big and complicated, with hundreds of libraries and code spread over a large directory tree; it doesn't seem like I can just specify a top-level directory and have the tool find everything.  Is there a way I can tell vtune to behave the same as if I was running finalize on the same host?
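A sketch of one possible flow, with placeholder paths: collect with deferred finalization on the scarce machines, then finalize elsewhere while pointing the search explicitly at the binaries and symbols:

```shell
# On the profiling host: skip finalization during collection
vtune -collect hotspots -finalization-mode=deferred -result-dir ./r001hs -- ./my_app

# On the other host: -search-dir can be repeated, one per directory
vtune -finalize -result-dir ./r001hs -search-dir /path/to/libs -search-dir /path/to/bins
```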

Thanks,

Ben


Speeding up finalize


Hi.  I'm wondering if there are any tricks to speeding up finalization.  My jobs usually run for 4-6 hours, of which maybe one or two hours has profiling enabled.  However, finalization can take 2-4 days.  I've tried limiting the sampling rate and the total data stored, but even then it still usually takes 10x longer to finalize than to profile (I have this sense that this didn't use to be the case when I was last using vtune a few years ago; perhaps I was using an older version?).  I'm currently using VTune  2019.  If it will make a big difference I could try to get it upgraded, but the tools are managed centrally so that's not always easy.  I'm hoping there are some things I can do to bring the finalization time down without losing too much in the way of profiling coverage.
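A sketch of the kind of thing that may help, under the assumption that coarser sampling and off-job finalization are acceptable (names are placeholders):

```shell
# A larger sampling interval (ms) means fewer samples to finalize;
# deferred mode moves finalization out of the job entirely so it can
# run on a less busy machine
vtune -collect hotspots -knob sampling-interval=50 -finalization-mode=deferred -result-dir ./r001hs -- ./my_app
vtune -finalize -result-dir ./r001hs
```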

Thanks,

Ben

 

Interpretation of profiling results


Hello!

I am trying to profile a C++ application with OpenMP using Intel VTune Profiler and I've run Hotspots, Threading user-mode and hardware-based analyses (see the "hotspots", "user" and "hardware" pictures, plus the "threads" picture from the hardware analysis).

I have several questions about the results of these analyses and would appreciate some help.

1) What do these results generally mean? If I'm not mistaken, the Hotspots analysis revealed that most of the time was spent usefully, yet the Threading analyses show the opposite.
2) What is the Semaphore object in the Threading user-mode analysis?
3) Why does one thread have so much load ("threads" picture)? Most of the work is done in the parallel region.

What should I do to increase the parallelism of this application?

I've read the documentation: https://software.intel.com/en-us/vtune-help-windows-targets but still can't understand what's happening in my case.

The algorithm of the application is simple:
#pragma omp parallel num_threads(8)
{
    if (myID == 0) {
        <master thread job>
    }
    #pragma omp for schedule(static)
    <parallel cycle>
    if (myID == 0) {
        <master thread job>
    }
}

Many thanks! :)

P.S. I have Windows 10 and NetBeans with the MinGW compiler.

cache hit/miss rate calculation - cascadelake platform


Hi,
I ran microarchitecture analysis on an 8280 processor and I am looking for usage metrics related to cache utilization, like L1, L2 and L3 hit/miss rates (total L1 misses / total L1 requests, ..., total L3 misses / total L3 requests) for the overall application. I was unable to see these on the VTune GUI summary page, and from this article it seems I may have to figure them out using a "custom profile".
From the explanation here (for Sandy Bridge), it seems we have the following for calculating cache hit/miss rates for demand requests:

Demand Data L1 Miss Rate => cannot calculate.

Demand Data L2 Miss Rate =>
(sum of all types of L2 demand data misses) / (sum of L2 demanded data requests) =>
(MEM_LOAD_UOPS_RETIRED.LLC_HIT_PS + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT_PS + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HITM_PS + MEM_LOAD_UOPS_MISC_RETIRED.LLC_MISS_PS) / (L2_RQSTS.ALL_DEMAND_DATA_RD)

Demand Data L3 Miss Rate =>
L3 demand data misses / (sum of all types of demand data L3 requests) =>
MEM_LOAD_UOPS_MISC_RETIRED.LLC_MISS_PS / (MEM_LOAD_UOPS_RETIRED.LLC_HIT_PS + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT_PS + MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HITM_PS + MEM_LOAD_UOPS_MISC_RETIRED.LLC_MISS_PS)

Q1: As that post was for Sandy Bridge and I am using Cascade Lake, I wanted to ask if there is any change in the formulas (mentioned above) for the latest platform, and whether there are events which have changed or been added on the latest platform which could help to calculate:
- L1 demand data hit/miss rate
- L1, L2, L3 prefetch and instruction hit/miss rate
Also, in this post here, the events mentioned for getting the cache hit rates do not include the ones mentioned above (e.g. MEM_LOAD_UOPS_RETIRED.LLC_HIT_PS).

amplxe-cl -collect-with runsa -knob event-config=CPU_CLK_UNHALTED.REF_TSC,MEM_LOAD_UOPS_RETIRED.L1_HIT_PS,MEM_LOAD_UOPS_RETIRED.L1_MISS_PS,MEM_LOAD_UOPS_RETIRED.L3_HIT_PS,MEM_LOAD_UOPS_RETIRED.L3_MISS_PS,MEM_UOPS_RETIRED.ALL_LOADS_PS,MEM_UOPS_RETIRED.ALL_STORES_PS,MEM_LOAD_UOPS_RETIRED.L2_HIT_PS:sa=100003,MEM_LOAD_UOPS_RETIRED.L2_MISS_PS -knob collectMemBandwidth=true -knob dram-bandwidth-limits=true -knob collectMemObjects=true

 

Q2: What would be the formula to calculate cache hit/miss rates with the aforementioned events?
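As a sketch of the arithmetic only (the counts below are made up; which events are the right ones for Cascade Lake is exactly the open question above):

```shell
# Hypothetical demand-load event counts read off a report
L2_HIT=900000
L2_MISS=100000
awk -v hit="$L2_HIT" -v miss="$L2_MISS" 'BEGIN {
  printf "L2 demand miss rate: %.2f%%\n", 100 * miss / (hit + miss)
}'
# prints: L2 demand miss rate: 10.00%
```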

Q3: Is it possible to get a few of these metrics (like MEM_LOAD_UOPS_MISC_RETIRED.LLC_MISS_PS, ...) from the raw data of the uarch analysis which I already ran via:

mpirun -np 56 -ppn 56 amplxe-cl -collect uarch-exploration -data-limit 0 -result-dir result_uarchexpl -- $PWD/app.exe

So, would the following be the correct way to run the custom analysis via the command line?

mpirun -np 56 -ppn 56 amplxe-cl -collect-with runsa -data-limit 0 -result-dir result_cacheexpl -knob event-config=MEM_LOAD_UOPS_RETIRED.LLC_HIT_PS,MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HIT_PS,MEM_LOAD_UOPS_LLC_HIT_RETIRED.XSNP_HITM_PS,L2_RQSTS.ALL_DEMAND_DATA_RD,MEM_LOAD_UOPS_MISC_RETIRED.LLC_MISS_PS,CPU_CLK_UNHALTED.REF_TSC,MEM_LOAD_UOPS_RETIRED.L1_HIT_PS,MEM_LOAD_UOPS_RETIRED.L1_MISS_PS,MEM_LOAD_UOPS_RETIRED.L3_HIT_PS,MEM_LOAD_UOPS_RETIRED.L3_MISS_PS,MEM_UOPS_RETIRED.ALL_LOADS_PS,MEM_UOPS_RETIRED.ALL_STORES_PS,MEM_LOAD_UOPS_RETIRED.L2_HIT_PS:sa=100003,MEM_LOAD_UOPS_RETIRED.L2_MISS_PS  -- $PWD/app.exe

(Please let me know if I need to use more/different events for the cache hit calculations.)

Q4: I noted that to calculate the cache miss rates, I need to get/view the data as "Hardware Event Counts", not as "Hardware Event Sample Counts" (https://software.intel.com/en-us/forums/vtune/topic/280087). How do I ensure this via the VTune command line? I generate the summary via:

vtune -report summary -report-knob show-issues=false -r <my_result_dir>

Let me know if I need to use a different command line to generate results/event values for the custom analysis type.
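For reference, one hedged option: the hw-events report, which should list per-event counts rather than raw sample counts (the result directory name is taken from the command above):

```shell
vtune -report hw-events -result-dir ./result_cacheexpl -format csv -csv-delimiter comma -report-output events.csv
```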

 

Intel VTune Profiler for time spent per function call


Hi amazing support :-)

 

I have a question: does the Intel VTune profiler give an option to see the time per call? I mean time per function call, the time in seconds spent in each function. Is there any option that can be used on the command line that would help me know the time, in seconds, spent in each function in my C++ application?
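A sketch of a command-line report that gives total CPU time per function from a hotspots result (r001hs is a placeholder; strictly per-call averages would additionally need call counts, which user-mode hotspots with stacks can collect):

```shell
vtune -report hotspots -result-dir ./r001hs -group-by function -column "CPU Time"
```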

 

Thanks in advance !

Intel VTune Profiler on Amazon bare metal instances


Hi,

Can we use Intel VTune Profiler on  Amazon bare metal instances?

 

Ubuntu 20.04 python3.5/python3.8 failure


I have installed Intel Parallel Studio XE 2020 on Ubuntu 20.04.

I am trying to run the following command:

vtune -run-pass-thru=--no-altstack -collect hotspots -- /usr/bin/python3 test.py

which calls python3.8 (the system default). VTune immediately stops and the following crash is generated by the system.

(see pic attached)

When I run it with vtune -run-pass-thru=--no-altstack -collect hotspots -- /usr/bin/python3.5 test.py, it runs fine. However, the software I want to profile is compiled against python3.8, so I cannot profile it now.

How can I solve this problem without recompiling the software with python3.5?

Cheers.

test.py:

import time

time.sleep(60)

System:

cmake version 3.16.3

gcc (Ubuntu 9.3.0-10ubuntu2) 9.3.0

Ubuntu 20.04 LTS.

vtune_profiler_2020.1.0.607630/

Attachment: vtune_crash.png (94.38 KB)

System Profiling of an AWS host with app running in Docker - Outside any known module


Hi,

The system I'm currently profiling is a Linux AWS instance; it's a Scala app (so running on the JVM) running in a Docker container. When I profile using a command line like:

 

vtune -collect hotspots -knob sampling-mode=hw -knob enable-stack-collection=true -finalization-mode=full

The majority of the recorded time is listed as 'Outside any known module', and it covers the code that I'm mainly interested in profiling. I had initially suspected that this was a result of the code being built without debug symbols; however, I built a new version with both the Java and Scala compilers set to record full debug symbols, and this didn't improve things.

Can you advise on what I'm missing that could enable me to get full stack info via VTune? I could remove the application from the Docker container if that would help, but this is slightly more complicated than it sounds in a prod-like environment, so I'd prefer to avoid it at this stage unless I know it has a good chance of success.

Many thanks!

Dominic

