
Thread: iz: New fast lossless RGB photo compression

  1. #61 by cfeck (Member, Germany)
    Quote Originally Posted by m^2:
    Mhm... thanks. I expected Huffman to be much more expensive... Just to be sure, you measure pure CPU time, right?

    Yep, what I wrote was not clear, sorry. Total time = user CPU time, 60 cycles per pixel. The Huffman tables are less than 2 KByte in size, so they are rarely evicted from the L1 cache. 10% of 60 cycles = 6 cycles for fetching the code and length from L1.
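
    To illustrate why the lookup is that cheap, here is a minimal sketch of a flat encode table where each entry packs the code and its bit length, so a single L1 load fetches both. The layout and names are my illustration here, not the actual iz code.

    Code:
    #include <cstdint>

    // Each entry packs a Huffman code and its bit length, so one
    // table load yields both values.
    struct HuffEntry {
        uint16_t code;   // code bits, right-aligned
        uint16_t length; // number of valid bits in 'code'
    };

    // 256 symbols * 4 bytes = 1 KByte: comfortably L1-resident.
    static HuffEntry table[256];

    struct BitWriter {
        uint64_t acc = 0; // bit accumulator
        int bits = 0;     // pending bits in 'acc'
        uint8_t* out;
        explicit BitWriter(uint8_t* dst) : out(dst) {}
        void put(uint32_t code, int length) {
            acc = (acc << length) | code; // append code bits
            bits += length;
            while (bits >= 8) {           // flush completed bytes
                bits -= 8;
                *out++ = uint8_t(acc >> bits);
            }
        }
    };

    void encode_symbols(const uint8_t* sym, int n, BitWriter& bw) {
        for (int i = 0; i < n; ++i) {
            const HuffEntry e = table[sym[i]]; // one L1 load per symbol
            bw.put(e.code, e.length);
        }
    }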

  2. #62 by kradradio (Member, Mountain View, CA)
    iz great.

    Hello folks. I had been hunting for a while for a lossless video codec to use in Krad Radio, but nothing really fit the Goldilocks zone of CPU/disk/compression trade-offs I was looking for. Until now.

    I heard about iz yesterday, and was blown away. So I turned it into a video codec, which of course is nothing more than a muxed sequence of iz images.

    I'm getting very good performance, 50 fps at 1080p, and that's single-threaded; it will be very straightforward to multi-thread, with threads processing separate frames (see the sketch below).
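
    Roughly, the frame-per-thread scheme could look like this (just a sketch; compress_frame is a placeholder standing in for the real libiz call, not the actual Krad VHS code):

    Code:
    #include <cstdint>
    #include <thread>
    #include <vector>

    struct Frame {
        std::vector<uint8_t> rgb;    // raw RGB input
        std::vector<uint8_t> packed; // compressed iz output
    };

    // Placeholder: the real codec would call into libiz here.
    std::vector<uint8_t> compress_frame(const std::vector<uint8_t>& rgb) {
        return rgb; // stub, no actual compression
    }

    // Each worker compresses a whole frame independently, so no
    // intra-frame synchronization is needed. One thread per frame
    // here; a real encoder would cap this with a fixed pool.
    void encode_batch(std::vector<Frame>& frames) {
        std::vector<std::thread> workers;
        workers.reserve(frames.size());
        for (Frame& f : frames)
            workers.emplace_back([&f] { f.packed = compress_frame(f.rgb); });
        for (std::thread& t : workers) t.join();
        // Compressed frames are then muxed in order, e.g. into mkv.
    }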

    Here is the info on an example file in mkv container: https://gist.github.com/4167683

    Among the many use cases for me are recording camera output to disk for later re-compositing and re-encoding, and transferring live lossless video over a gigabit Ethernet IP network instead of using HD-SDI with all of the expensive capture cards and cabling that go along with it.

    This solves a big problem for me in its current state, and I am really happy with the result. Of course, I had a few thoughts about future work:

    - Multi-threaded encoding of a single frame
    - A hybrid codec with PNG or even JPEG: PNG could be used for synthetic images (screencasting) when there is enough CPU for it, and JPEG could be used when you're doing network transfer of video but the network gets over-saturated, so instead of lagging or dropping the connection, the codec can switch to JPEG as needed (see the sketch after this list)
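
    A rough sketch of the fallback logic I have in mind (thresholds and names are made up):

    Code:
    #include <cstddef>

    enum class Codec { IZ_LOSSLESS, JPEG_FALLBACK };

    // Switch to JPEG when the send queue backs up, and return to
    // lossless once it drains. Hysteresis avoids rapid flip-flopping.
    Codec pick_codec(size_t queued_bytes) {
        constexpr size_t kHighWater = 8 * 1024 * 1024; // backlog: fall back
        constexpr size_t kLowWater  = 1 * 1024 * 1024; // drained: go lossless
        static Codec current = Codec::IZ_LOSSLESS;
        if (queued_bytes > kHighWater)
            current = Codec::JPEG_FALLBACK;
        else if (queued_bytes < kLowWater)
            current = Codec::IZ_LOSSLESS;
        return current;
    }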

    Thanks!

    David Richards

    http://kradradio.com


    Oh, PS: I called the codec Krad VHS, for Krad Very High Stuff or Krad Home Video System :P

    The code I wrote for it is here: https://github.com/krad-radio/krad_r...s/krad_vhs.cpp

    I threw this together in just a couple of hours when I was exhausted, so it's a bit sloppy right now. This isn't really something you can make use of at the moment unless you're using KR, have compiled this special branch, and have libiz installed, so the typical experimental-software warning applies.
    Last edited by kradradio; 30th November 2012 at 01:42.

  3. #63 by cfeck (Member, Germany)
    Hi David,

    For video compression, there should be better alternatives available. I have no clue how good they are, but someone mailed me that fast lossless video codecs can be 20x faster than PNG and compress similarly to JPEG-LS, thanks to the YUV color space, inter-frame prediction, and SIMD. Check codecs such as UtVideo, FFV2, and ICE9. Some of them may already be part of FFmpeg/libavcodec. More info at http://wiki.multimedia.cx/index.php?...s_Video_Codecs

    I quickly checked your code, and I guess you could avoid some copying by using a custom pixel accessor template (a sketch of the idea follows). Please do not create a shared library from libiz; only link statically.
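
    The idea, roughly (illustrative names, not the actual libiz interface): template the encoder on an accessor type, so it reads pixels straight out of the caller's buffer, whatever its stride and channel order, instead of copying into an intermediate RGB array first.

    Code:
    #include <cstdint>

    // Reads pixels in place from a BGRA buffer with arbitrary stride.
    struct BGRAAccessor {
        const uint8_t* base;
        int stride; // bytes per row
        uint8_t r(int x, int y) const { return base[y * stride + 4 * x + 2]; }
        uint8_t g(int x, int y) const { return base[y * stride + 4 * x + 1]; }
        uint8_t b(int x, int y) const { return base[y * stride + 4 * x + 0]; }
    };

    // The encoder never copies the image; it pulls pixels through the
    // accessor, which the compiler inlines away.
    template <typename Accessor>
    void encode_image(const Accessor& src, int width, int height) {
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x) {
                uint8_t r = src.r(x, y), g = src.g(x, y), b = src.b(x, y);
                // ... predict and entropy-code r, g, b here ...
                (void)r; (void)g; (void)b;
            }
    }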
    Last edited by cfeck; 4th December 2012 at 02:36. Reason: typos

  4. #64 by sh0dan (Member, Denmark)
    Good to see a fast image compressor with strong performance. I have written a RAW image decompressor, and I find the biggest issue with current algorithms (Lossless JPEG in most cases) is that they cannot be decoded multi-threaded. DNG 'solves' this by having tiles: it encodes 512x512 pixel tiles separately. This works surprisingly well, but of course costs a bit of compression.

    Another possibility could be to start by storing the entire leftmost column with downward prediction, so you basically have the leftmost pixel of every line in the image, and then an offset table pointing to where every Xth line starts in the stream, so the decoder can pick one or several lines to start from at once. This leaves the most flexibility with the least coding. It of course means you cannot predict downwards everywhere; how much do you lose if, for instance, every 16th line only uses left prediction?

    This would also make partial decoding easy, and you could even use the same offsets for duplicated lines. A sketch of what I mean follows.
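
    In code, the offset-table idea could look something like this (my sketch of the scheme described above, not an existing format):

    Code:
    #include <cstdint>
    #include <thread>
    #include <vector>

    constexpr int kRestartInterval = 16; // rows per independent stripe

    struct Stripe {
        uint64_t bit_offset; // where the stripe starts in the stream
        int first_row;       // coded with left prediction only
    };

    void decode_stripe(const uint8_t* stream, const Stripe& s, uint8_t* image) {
        // Entropy-decode rows [s.first_row, s.first_row + kRestartInterval)
        // starting at s.bit_offset; the first row predicts from the left only.
        (void)stream; (void)s; (void)image; // decoding body omitted in sketch
    }

    // Each stripe is independent, so threads can decode them in parallel,
    // or a partial decode can pick just the stripes it needs.
    void decode_parallel(const uint8_t* stream,
                         const std::vector<Stripe>& index, uint8_t* image) {
        std::vector<std::thread> workers;
        workers.reserve(index.size());
        for (const Stripe& s : index)
            workers.emplace_back([&s, stream, image] {
                decode_stripe(stream, s, image);
            });
        for (std::thread& t : workers) t.join();
    }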

    Edit: The reason I think line-based segmentation is better is that each thread operates on separate cache lines, as opposed to tiles, where adjacent tiles share many cache lines.
    Last edited by sh0dan; 4th December 2012 at 13:33.
