vioperf - confusing results

Hi - I've got 6 DL380p servers built to the spec outlined in the HP DL380p hardware guide document on the myvertica website. I've configured the database drive as RAID 10, but I'm getting figures around the 1300 MB/s mark for the vioperf Write test (a test DL380p unit had previously hit around 1600 MB/s with the same configuration). As an experiment, I rebuilt the data drive as RAID 6 (ADG) and ran vioperf again - now I'm getting around 1800 MB/s versus 1300 for the RAID 10 configuration. I thought RAID 10 was supposed to be faster than RAID 6! Has anyone come across similarly confusing results from the vioperf tool? (I'm running RHEL 6.4, btw; each DL380p has 2x Intel Xeon 2.8 GHz with 10 cores each, so 20 in total.)


    I'm not familiar with ADG, but RAID 10 theoretically gets much better write performance than RAID 6. I like to use the WolframAlpha RAID calculator (google it) for these kinds of sanity checks.
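    That rule of thumb holds for small random writes, but sequential streaming writes (which vioperf's write test largely generates) behave differently. Here's a back-of-the-envelope sketch of the theoretical sequential-write ceilings; the 150 MB/s per-drive figure is an assumption for a 10K SAS spindle, not a measured value:

```python
# Theoretical sequential-write ceilings, ignoring controller and cache effects.
# PER_DRIVE_MB_S is an assumed per-spindle streaming rate, not a measurement.
PER_DRIVE_MB_S = 150

def raid10_seq_write(n_drives, per_drive=PER_DRIVE_MB_S):
    # Every block is written twice (mirrored), so only half the
    # spindles contribute net write bandwidth.
    return (n_drives // 2) * per_drive

def raid6_seq_write(n_drives, per_drive=PER_DRIVE_MB_S):
    # Full-stripe sequential writes avoid the read-modify-write penalty;
    # two drives' worth of bandwidth goes to the P and Q parity blocks.
    return (n_drives - 2) * per_drive

for n in (8, 12, 24):
    print(n, raid10_seq_write(n), raid6_seq_write(n))
```

    On larger arrays RAID 6 has more data spindles than RAID 10, so for large streaming writes (with a write-back controller cache coalescing full stripes) it can legitimately come out ahead - which would explain the 1800 vs 1300 MB/s result. RAID 10 wins back on small random writes, where RAID 6 pays roughly 6 I/Os per logical write.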
    Same here. I tested with vioperf, but the results were horrible compared to vioprobe (another Vertica tool). I also ran fio, and its results were more in line with vioprobe's. Does anyone know whether vioperf reports incorrect results or simply isn't pushing the I/O correctly?


    I am also finding vioperf unreliable: the documentation doesn't clearly explain which output figures to use, and the results are lower than what the I/O channel can actually deliver. The older vioprobe seems more reliable to me.


    Perhaps someone from Vertica can explain whether the newer tool has some considerations baked in that I am not seeing.


    Example : 

    Existing prod host, vioprobe:

    -------- Write ------     ------- Rewrite -----     -------- Read -------

    Elapsed  MB/sec  %CPU     Elapsed  MB/sec  %CPU     Elapsed  MB/sec  %CPU

      270.8    974     99        1001.2    263     51        475.5    555     40      


    vioperf : 

    Write:    MB/s      | 255

    ReWrite:  MB/s      | 460 + 460

    Read:     MB/s      | 120

    SkipRead: seeks/sec | 4900


    How can ReWrite (read + write in parallel) perform better than read or write alone? There could be a case where the same data is read and then written back and is actually serviced from cache, in which case the results would not reflect real-world use, but I am hoping something different is going on.
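    One possible (unconfirmed) reading of the "460 + 460" figure is that vioperf reports the read stream and the write stream of the rewrite test separately, so neither number alone is directly comparable to the pure Read or Write figure. A small sketch of the arithmetic, assuming that interpretation:

```python
# Hypothetical interpretation of "ReWrite: 460 + 460": the two numbers
# are the concurrent read and write streams, reported separately.
rewrite_read_mb_s = 460
rewrite_write_mb_s = 460

# Total data moved through the I/O channel per second, both directions.
total_mb_s = rewrite_read_mb_s + rewrite_write_mb_s
print(total_mb_s)  # 920

# That total can exceed the pure-write figure (255 here) if, for example,
# the read half is partly serviced from the page cache rather than disk.
```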


    Vioprobe shows more realistic results. The server in question has 23 x 300 GB 10K SAS drives in RAID 10; the 120 MB/s shown by vioperf is too low to be realistic.


    Another example (benchmark server). vioprobe:



    -------- Write ------     ------- Rewrite -----     -------- Read -------

    Elapsed  MB/sec  %CPU     Elapsed  MB/sec  %CPU     Elapsed  MB/sec  %CPU

      435.0    528     57        957.8    240     29        461.2    498     25 



    vioperf:

    Write 460 MB/sec,

    ReWrite 245 + 245,

    Read 500 MB/sec,

    SkipRead 64500 seeks/sec

    Here the vioperf and vioprobe results are better aligned. This system is tuned for IOPS (SSDs), hence the high SkipRead figure.


    It would be useful for the documentation (and the initial vioperf message) to specify whether Read, Write, and Rewrite must each meet the 40 MB/s per core requirement, or only some of them.
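    For a concrete instance of that requirement, applied to the 20-core DL380p from the original post, the arithmetic works out as follows (a trivial sketch; whether each metric must individually clear the bar is exactly the ambiguity raised above):

```python
# Vertica's sizing rule of thumb: 40 MB/s of I/O throughput per physical core.
def required_mb_s(cores, mb_per_core=40):
    return cores * mb_per_core

# The original poster's DL380p: 2 sockets x 10 cores = 20 cores.
print(required_mb_s(20))  # 800

# Whether each of Read/Write/ReWrite must individually reach this figure,
# or only one of them, is what the documentation leaves unclear.
```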


