
Thread: WaveOne

#1: willvarfar (Member) | Join Date: Feb 2010 | Location: Nordic | Posts: 200 | Thanks: 41 | Thanked 36 Times in 12 Posts

    WaveOne

I bumped into this on the Internet and, finding there was no mention of it here yet, thought it needed its own thread:

    http://www.wave.one/icml2017

    "As of today, WaveOne image compression outperforms all commercial codecs and research approaches known to us on standard datasets where comparison is available. Furthermore, with access to a GPU our codec runs orders of magnitude faster than other recent ML-based solutions: for example, we typically encode or decode the Kodak dataset at over 100 images per second."

The Following 8 Users Say Thank You to willvarfar For This Useful Post:

Alexander Rhatushnyak (23rd May 2017), Bulat Ziganshin (18th May 2017), comp1 (18th May 2017), Darek (18th May 2017), encode (20th May 2017), Jarek (20th May 2017), necros (20th May 2017), wilon (20th May 2017)

#2: necros (Member) | Join Date: Jul 2014 | Location: Mars | Posts: 164 | Thanks: 115 | Thanked 10 Times in 9 Posts
Too bad there's no price and no trial version.

#3: nikkho (Member) | Join Date: Jul 2011 | Location: Spain | Posts: 542 | Thanks: 214 | Thanked 163 Times in 104 Posts
Quote Originally Posted by necros:
Too bad there's no price and no trial version.
I guess it is not even implemented yet.

#4: Member | Join Date: Nov 2013 | Location: Kraków, Poland | Posts: 645 | Thanks: 205 | Thanked 196 Times in 119 Posts
I mentioned a few months ago that a revolution in multimedia compression with GANs ( https://en.wikipedia.org/wiki/Genera...arial_networks ) was coming ... and here it is.

Their arXiv paper: https://arxiv.org/pdf/1705.05823.pdf

#5: Piotr Tarsa (Member) | Join Date: Jun 2009 | Location: Kraków, Poland | Posts: 1,471 | Thanks: 26 | Thanked 120 Times in 94 Posts
Isn't that going to distort images in unpredictable ways? I mean, assuming it relies on pre-trained models, the reconstructed details will depend on the dataset the program was trained on. So if the program was trained on dogs and we're compressing cats, then as the compression ratio increases we'll get dogs' features reconstructed in the cats' images. Or am I missing something, and there are no pre-trained models in this compression scheme?
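A minimal sketch of that concern, assuming a generic learned autoencoder codec (the names, shapes, and quantization scheme are illustrative, not WaveOne's actual architecture): the decoder's weights are a prior fitted to the training set, so whatever detail the quantizer throws away gets filled back in from that prior.

```python
# Hypothetical learned codec, illustrating where a training-set prior enters.
# Nothing here is WaveOne's architecture; it is a generic autoencoder sketch.
import torch
import torch.nn as nn

class TinyCodec(nn.Module):
    def __init__(self, bottleneck_channels=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, bottleneck_channels, 4, stride=2, padding=1),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(bottleneck_channels, 32, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        code = self.encoder(x)
        # Quantization is where information is destroyed; at decode time the
        # missing detail is reconstructed from the decoder's learned weights,
        # i.e. from whatever distribution the codec was trained on.
        # (Actually training such a codec needs a straight-through estimator
        # here, since round() has zero gradient.)
        quantized = torch.round(code * 8) / 8
        return self.decoder(quantized)

codec = TinyCodec()
x = torch.rand(1, 3, 64, 64)   # stand-in for a "cat" image
y = codec(x)                   # lost detail is filled in from trained weights
print(y.shape)                 # torch.Size([1, 3, 64, 64])
```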

#6: Member | Join Date: May 2017 | Location: Vietnam | Posts: 1 | Thanks: 1 | Thanked 0 Times in 0 Posts
Can someone knowledgeable here read it and tell whether it's real or bullshit? There's a part like this in the paper:
Finally, Pied Piper has recently claimed to employ ML techniques in its Middle-Out algorithm (Judge et al., 2016), although their nature is shrouded in mystery
But all their other arguments seem very sensible and serious, and I'm not clever enough to tell whether this is a serious paper or a joke. Can we summon Matt here?

#7: Bulat Ziganshin (Programmer) | Join Date: Mar 2007 | Location: Uzbekistan | Posts: 4,497 | Thanks: 733 | Thanked 659 Times in 354 Posts
Quote Originally Posted by Piotr Tarsa:
Isn't that going to distort images in unpredictable ways? I mean, assuming it relies on pre-trained models, the reconstructed details will depend on the dataset the program was trained on. So if the program was trained on dogs and we're compressing cats, then as the compression ratio increases we'll get dogs' features reconstructed in the cats' images. Or am I missing something, and there are no pre-trained models in this compression scheme?
It's a well-known fact that people trained to recognize Europeans' faces are less successful at recognizing Asians' faces, and vice versa. So why not? Just select a proper training set if you are going to compress cats.

#8: Member | Join Date: Nov 2013 | Location: Kraków, Poland | Posts: 645 | Thanks: 205 | Thanked 196 Times in 119 Posts
It is not a problem to do the training right, i.e. matched to the types of images the codec will actually be used on: the arXiv paper says they use the Yahoo Flickr Creative Commons 100 Million dataset.
It is used here by the GAN's discriminator network to fit the reconstructed features to the space of real-life images (a rough sketch of this adversarial setup follows below).

ps. the Silicon Valley reference is obviously a joke, but the rest seems legit ... this TV series has brought some (very skewed) spotlight to the data compression field ... at least now everybody in this industry wants to be seen as the Pied Piper ...
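A rough sketch of that discriminator idea, assuming a generic adversarial training loop (the tiny stand-in networks, loss weights, and sizes are assumptions, not the paper's recipe): the codec is trained to reconstruct faithfully while also fooling a discriminator, which pushes its outputs toward the space of real-life images.

```python
# Generic GAN-style training step for a learned codec (illustrative only;
# the networks, loss weights, and sizes are assumptions, not WaveOne's).
import torch
import torch.nn as nn

codec = nn.Sequential(  # tiny stand-in autoencoder
    nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
)
disc = nn.Sequential(   # discriminator: real image vs reconstruction
    nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.Linear(16 * 32 * 32, 1),  # for 64x64 inputs
)
bce = nn.BCEWithLogitsLoss()
opt_codec = torch.optim.Adam(codec.parameters(), lr=1e-4)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-4)

real = torch.rand(8, 3, 64, 64)   # stand-in training batch
fake = codec(real)

# 1) Discriminator learns to separate real images from reconstructions.
d_loss = (bce(disc(real), torch.ones(8, 1)) +
          bce(disc(fake.detach()), torch.zeros(8, 1)))
opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

# 2) Codec minimizes distortion AND tries to fool the discriminator,
#    which pulls reconstructions toward the space of real-life images.
g_loss = (nn.functional.mse_loss(fake, real) +
          0.01 * bce(disc(fake), torch.ones(8, 1)))
opt_codec.zero_grad(); g_loss.backward(); opt_codec.step()
```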

#9: Alexander Rhatushnyak (Member) | Join Date: Oct 2007 | Location: Canada | Posts: 232 | Thanks: 38 | Thanked 80 Times in 43 Posts
Quote Originally Posted by willvarfar:
    http://www.wave.one/icml2017

    "Here is how different image codecs compress an example 480x480 image to a file size of 2.3kB"

Guess their 2.3 kB image must be friends with a 10 MB compressed model file, and furthermore there should be hundreds of such models (see the break-even arithmetic below).

    This newsgroup is dedicated to image compression:
    http://linkedin.com/groups/Image-Compression-3363256
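One way to quantify that objection, with explicitly made-up numbers (the 2.3 kB per image is from the quote; the 10 MB model is the guess above; the baseline codec size is a pure assumption):

```python
# Break-even arithmetic for shipping a large decoder model with a codec.
# 2.3 kB/image is from the WaveOne demo; 10 MB is the guessed model size;
# the 4.0 kB baseline (a classical codec at similar quality) is assumed.
per_image_kb = 2.3
model_kb = 10 * 1024
baseline_kb = 4.0

saving_kb = baseline_kb - per_image_kb
print(f"break-even after ~{model_kb / saving_kb:.0f} images")  # ~6024
```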

The Following User Says Thank You to Alexander Rhatushnyak For This Useful Post:

encode (23rd May 2017)
