Thread: Fuzz testing

  1. #1
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts

    Fuzz testing

    This post is a shameless plug. I played a bit with fuzzing and found many bugs. I believe most codec developers will find the story about it useful:
    https://extrememoderate.wordpress.co...g-compressors/

  2. The Following 5 Users Say Thank You to m^2 For This Useful Post:

    Bulat Ziganshin (17th November 2015), Cyan (17th November 2015), jibz (17th November 2015), Jyrki Alakuijala (17th November 2015), Turtle (17th November 2015)

  3. #2
    Member
    Join Date
    Jul 2013
    Location
    United States
    Posts
    194
    Thanks
    44
    Thanked 140 Times in 69 Posts
    It's great to see someone else working on this. I've been doing a bit of fuzz testing of Squash plugins, too (I've recently started trying to keep a list of results). Eventually I plan to do at least some fuzzing of all Squash plugins.

    Would you be interested in collaborating on this? Even if you don't want to use Squash for the testing, we could share patches for things like disabling checksums, and maybe even avoid duplicating fuzzing efforts. I've also been thinking it might be a good idea to create a git repository with the tests that cause issues to help avoid regressions.

    One thing I would like to suggest: use a directory on a tmpfs file system. AFL can be pretty tough on hard drives, and if you have enough RAM there is no reason to subject your HDD to that kind of abuse.
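
    Something along these lines is all it takes (the seed directory and target name below are just placeholders):
    Code:
    # create a small RAM-backed work area (size is arbitrary)
    sudo mount -t tmpfs -o size=512M tmpfs /mnt/afl-ram

    # keep afl-fuzz's output directory (and all its queue/crash churn) off the HDD;
    # @@ is replaced by the current test case
    afl-fuzz -i testcases -o /mnt/afl-ram/findings -- ./decompress_target @@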

  4. The Following 2 Users Say Thank You to nemequ For This Useful Post:

    Bulat Ziganshin (17th November 2015), jibz (17th November 2015)

  5. #3
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts
    For now, I'm into fuzzing zstd. I don't know what to do next. Either:
    * nothing
    * fuzz something else
    * do some concolic testing

    The choice depends on:
    * whether there will be fixes to bugs that I found when I'm done with zstd
    * whether I don't get bored
    * whether I manage to prepare concolic test tools. I found nothing as straightforward as afl.

    If I stay with fuzzing, I need a target that meets the needs that I described. The only things that I see are
    * made by large corporations which have enough QA; I don't need to work for them
    * specialised codecs, e.g. image codecs

    I may be missing something.

    Regardless of the choice, I'm definitely into working together, at least with what I have now.

    I wonder what is a better fuzz target, an official frontend or a squash plugin.
    The official frontend has more code that deserves fuzzing, though that code is likely less interesting. The Squash core is well fuzzed, I guess. Now a few questions.
    What is usually more complex, squash or the official frontend, when tested on the default compression / decompression path? The simpler the frontend, the fewer paths unrelated to the main algorithm.
    What is usually faster? Speed is critical...
    Is there a way to do selective instrumentation? If you instrumented only the library and ignored everything else, you would get fewer cases, all of which cover your core code.
    Last edited by m^2; 17th November 2015 at 10:12.

  6. #4
    Member jibz's Avatar
    Join Date
    Jan 2015
    Location
    Denmark
    Posts
    114
    Thanks
    91
    Thanked 69 Times in 49 Posts
    I think one of the benefits of fuzzing through squash is that it provides a common interface to a lot of codecs, so it should be possible to automate a lot of the setup.

    A drawback is that you do not test the official frontend, which may serve as an example for users.

  7. #5
    Member
    Join Date
    Nov 2015
    Location
    Ślůnsk, PL
    Posts
    81
    Thanks
    9
    Thanked 13 Times in 11 Posts
    When I set the test up once and then do nothing for a week, I don't need a common interface.
    However, it may be used to lower the barrier to entry for others. I imagine downloading squash, typing 'make fuzz' and having the makefile automatically download afl, build it, build squash with instrumentation and download the most recent initial test cases. Then './fuzz' and the thing runs (roughly as sketched below).
    I wonder why there's no project similar to folding@home that does fuzz testing.
    At least I don't see one here.
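
    Roughly, such a 'make fuzz' wrapper could boil down to something like this (everything here, including the tarball URL placeholder, is just illustrative):
    Code:
    # fetch and build afl
    wget -O - <afl-tarball-url> | tar xz && mv afl-* afl
    make -C afl

    # rebuild squash with afl instrumentation
    CC=$PWD/afl/afl-gcc cmake . && make

    # run on the seed corpus (plus whatever current initial test cases get downloaded)
    afl/afl-fuzz -i seed-corpus -o findings -- ./fuzz-target @@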

    Another benefit of a common frontend would be support for compression verification. Even if it's not there already, that's just one place to add it. And bugs that manifest as decompression not being the inverse of compression are both common and highly severe.
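
    As a rough sketch of the round-trip idea (in a real harness the check would sit inside the instrumented target so afl sees the abort; ./compress and ./decompress are placeholders):
    Code:
    #!/bin/sh
    # round-trip check: compress, decompress, compare against the original input
    ./compress   "$1"        > /tmp/rt.cmp || exit 1
    ./decompress /tmp/rt.cmp > /tmp/rt.out || exit 1
    # a mismatch raises SIGABRT so the run gets recorded as a crash
    cmp -s "$1" /tmp/rt.out || kill -ABRT $$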

  8. #6
    Member
    Join Date
    Jul 2013
    Location
    United States
    Posts
    194
    Thanks
    44
    Thanked 140 Times in 69 Posts
    Quote Originally Posted by m^2 View Post
    * whether there will be fixes to bugs that I found when I'm done with zstd
    I wouldn't worry about that. Assuming you've reported them through proper channels (either the GitHub issue tracker or, if you're worried about security, e-mail) I'm sure Yann will fix them.

    Quote Originally Posted by m^2 View Post
    I wonder what is a better fuzz target, an official frontend or a squash plugin.
    The official frontend has more code that deserves fuzzing, though that code is likely less interesting. The Squash core is well fuzzed, I guess. Now a few questions.
    What is usually more complex, squash or the official frontend, when tested on the default compression / decompression path? The simpler the frontend, the fewer paths unrelated to the main algorithm.
    They're probably just about equal, and any paths you can't alter by changing the input data (only by command-line parameters) aren't really relevant, except that they can affect execution time. That said, it's probably a bit better to use the official frontend just because library authors are more likely to accept your bug report; there isn't ever a doubt that the issue is in their code.

    Quote Originally Posted by m^2 View Post
    What is usually faster? Speed is critical...
    Again, it *probably* doesn't really matter. The official one may be slightly faster since they don't have to worry about conforming to a specific API, but generally the overhead added by Squash is pretty trivial.

    Quote Originally Posted by m^2 View Post
    Is there a way to do selective instrumentation? If you instrumented only the library and ignored everything else, you would get fewer cases, all of which cover your core code.
    There isn't anything built in, but you could always add some flags to the plugin's CMakeLists.txt and not to the core library. That said, does it really make a difference? If it's impossible to reach an execution path, does it really impact AFL's performance?
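
    If you drive the compiler yourself it would look something like this (directory names are made up):
    Code:
    # instrument only the codec sources; leave the frontend/glue uninstrumented
    afl-gcc -O2 -c lib/*.c
    gcc     -O2 -c frontend/main.c
    afl-gcc -O2 *.o -o fuzz_target    # link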

  9. #7
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Quote Originally Posted by nemequ View Post
    I wouldn't worry about that. Assuming you've reported them through proper channels (either the GitHub issue tracker or, if you're worried about security, e-mail) I'm sure Yann will fix them.
    Yann fixes bugs; what I reported has been taken care of already. I finished fuzzing zstd compression (though an option to verify that data decompresses correctly would make me restart the effort) and have spent many days on decompression. I keep finding new paths. Once I'm done with it I need something new to do. I could switch back to another project if that project's author or maintainer fixed the outstanding bugs that I reported. OK, heatshrink is clean too, but the author stated that he intends to fuzz it again, so I think I'll skip it for now.
    I thought about doing xwrt, but so far I have failed to compile it with afl.

    Quote Originally Posted by nemequ View Post
    They're probably just about equal, and any paths you can't alter by changing the input data (only by command-line parameters) aren't really relevant, except that they can affect execution time. That said, it's probably a bit better to use the official frontend just because library authors are more likely to accept your bug report; there isn't ever a doubt that the issue is in their code.
    I have found issues in official frontends. So it does matter.

    Quote Originally Posted by nemequ View Post
    Again, it *probably* doesn't really matter. The official one may be slightly faster since they don't have to worry about conforming to a specific API, but generally the overhead added by Squash is pretty trivial.
    OK.

    There isn't anything built in, but you could always add some flags to the plugin's CMakeLists.txt and not to the core library. That said, does it really make a difference? If it's impossible to reach an execution path, does it really impact AFL's performance?
    I'm rather interested in avoiding covering the same path twice just because a change in execution in something uninteresting made afl consider it to be different.
    As a bolder example, when testing compression with verification of decompression, I'd rather not have decompression instrumented. I don't care about paths there as I'll test it separately anyway. Changing the makefile is something I have to check.

    Now, I have a number of test files that deserve being put somewhere. Really, I think that the best place would be the author's repo. But I may as well post them here. What do you think?

  10. #8
    Member
    Join Date
    Jul 2013
    Location
    United States
    Posts
    194
    Thanks
    44
    Thanked 140 Times in 69 Posts
    Quote Originally Posted by m^2 View Post
    I have found issues in official frontends. So it does matter.
    Right, but they're still bugs in *their* repo. If there is no third-party code "tainting" the results there isn't really any room for doubt as to where the blame lies.

    OTOH, I'd certainly appreciate knowing about any bugs in Squash. I've found (and fixed) issues before thanks to fuzzing, so I *think* it's pretty stable now, but obviously I can't make any promises. It's also possible to find issues in the plugins instead of the core; I haven't fuzzed them all.

    Quote Originally Posted by m^2 View Post
    As a bolder example when testing compression with verification of decompression, I'd rather not have decompression instrumented. I don't care about paths there as I'll test it separately anyway. Changing makefile is something I have to check.
    Have you found a lot of bugs in compression code? I've only been fuzzing decompression so far…

    Quote Originally Posted by m^2 View Post
    Now, I have a number of test files that deserve being put somewhere. Really, I think that the best place would be the author's repo. But I may as well post them here. What do you think?
    Right, I do too. How about we create a repository somewhere? GitHub would be convenient for me, but I'd be fine with GitLab, BitBucket, etc.

    If we use GitHub we could set it up to have Travis test them all automatically whenever we make a change. If we make Squash a submodule then just updating Squash could trigger a run.

    One thing we should decide on soon, though, is a disclosure policy. Obviously these issues have security implications, so it would probably be wise to hold off on publishing test files until the issues are fixed (or a reasonable amount of time has elapsed). I'd suggest something like 1 month after reporting them if we don't receive a response, or up to 6 months if they confirm the issue and say they're working on it.

  11. #9
    Member
    Join Date
    Jul 2013
    Location
    United States
    Posts
    194
    Thanks
    44
    Thanked 140 Times in 69 Posts
    I just created a repository on GitHub: https://github.com/nemequ/compfuzz. Obviously it's just an early, fairly incomplete version… if anyone else has anything to add I'd be quite happy to include it. If anyone else has done any fuzz testing I'd be happy to add information to the results, even if it is of your own library; I don't think results need to be independent here. Or, if you have a GitHub account, you can add it yourself (the results page is a wiki).

    m^2, I'd be happy to add you to the project.

  12. The Following 3 Users Say Thank You to nemequ For This Useful Post:

    Bulat Ziganshin (21st November 2015), Cyan (21st November 2015), Kennon Conrad (21st November 2015)

  13. #10
    Member
    Join Date
    Jul 2013
    Location
    United States
    Posts
    194
    Thanks
    44
    Thanked 140 Times in 69 Posts
    m^2, I'm trying to add your results to the table on the Results wiki page. Can you provide details about if/when the issues you found were fixed (and, preferably, when they were reported)?

  14. #11
    Member
    Join Date
    Sep 2007
    Location
    Denmark
    Posts
    856
    Thanks
    45
    Thanked 104 Times in 82 Posts
    M^2

    In your WordPress post you "complain" about only having 2 old cores to work with.
    If you need it, I can set up my compression machine with a TeamViewer share so you can use it. It will give me a reason to finally build the machine; I believe it's 4x Xeon, based on the i7 900 series, with 6 cores plus hyper-threading and 48 GB of RAM.

  15. #12
    Member
    Join Date
    Aug 2010
    Location
    Seattle, WA
    Posts
    79
    Thanks
    6
    Thanked 67 Times in 27 Posts
    I love this work on Fuzz testing - it's extremely valuable. I saw how you tried to fuzz test LZHAM and encountered some problems. It's a very high priority for me to fix this and try fuzzing it myself.

  16. #13
    Member
    Join Date
    Jul 2013
    Location
    United States
    Posts
    194
    Thanks
    44
    Thanked 140 Times in 69 Posts
    Quote Originally Posted by rgeldreich View Post
    I love this work on Fuzz testing - it's extremely valuable. I saw how you tried to fuzz test LZHAM and encountered some problems. It's a very high priority for me to fix this and try fuzzing it myself.
    I'm not sure why he had problems getting afl-fuzz to run on LZHAM, it doesn't seem to be an issue for me.

    My problem is that it seems to be very easy to cause LZHAM to take a *very* long time to decompress even very small files… I have one 4K file which I lost patience waiting for lzhamtest to decompress after 5 minutes on my Xeon E3-1225 v3; it may be caught in an infinite loop. That causes afl-fuzz to be terribly slow. I'll send you an e-mail with some problem files from a quick run. Once those are fixed hopefully I can do some real testing.
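
    In the meantime, afl's timeout and memory limits at least keep a hang from stalling the whole run (the lzhamtest arguments below are just my guess at the decompress invocation):
    Code:
    # -t 5000+ : kill a run after 5 s, and only skip (rather than abort) on seed timeouts
    # -m 512   : cap the target's memory at 512 MB
    afl-fuzz -t 5000+ -m 512 -i in -o out -- ./lzhamtest d @@ /dev/null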

  17. #14
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Quote Originally Posted by nemequ View Post
    I just created a repository on GitHub: https://github.com/nemequ/compfuzz. Obviously it's just an early, fairly incomplete version… if anyone else has anything to add I'd be quite happy to include it. If anyone else has done any fuzz testing I'd be happy to add information to the results, even if it is of your own library; I don't think results need to be independent here. Or, if you have a GitHub account, you can add it yourself (the results page is a wiki).

    m^2, I'd be happy to add you to the project.
    Nice, though really I would be happier to work with a site that has nothing to do with git.
    Nevertheless, what I'm missing are regular datasets that provide high coverage in afl.
    Quote Originally Posted by nemequ View Post
    m^2, I'm trying to add your results to the table on the Results wiki page. Can you provide details about if/when the issues you found were fixed (and, preferably, when they were reported)?
    I'll try later.

    Quote Originally Posted by SvenBent View Post
    M^2

    In your WordPress post you "complain" about only having 2 old cores to work with.
    If you need it, I can set up my compression machine with a TeamViewer share so you can use it. It will give me a reason to finally build the machine; I believe it's 4x Xeon, based on the i7 900 series, with 6 cores plus hyper-threading and 48 GB of RAM.
    That would be superb!
    But I have to note that on my current rig the setup writes very roughly 3.5 TB of temporary files per month. So it requires storage that is *not* a flash SSD. Though the OS should be able to discard nearly all of this traffic, I wouldn't take the risk, especially on someone else's rig.

    Quote Originally Posted by nemequ View Post
    I'm not sure why he had problems getting afl-fuzz to run on LZHAM, it doesn't seem to be an issue for me.

    My problem is that it seems to be very easy to cause LZHAM to take a *very* long time to decompress even very small files… I have one 4K file which I lost patience waiting for lzhamtest to decompress after 5 minutes on my Xeon E3-1225 v3; it may be caught in an infinite loop. That causes afl-fuzz to be terribly slow. I'll send you an e-mail with some problem files from a quick run. Once those are fixed hopefully I can do some real testing.
    Once I compiled LZHAM with the settings mentioned in the blog post it wouldn't run at all.

  18. #15
    Member
    Join Date
    Jul 2013
    Location
    United States
    Posts
    194
    Thanks
    44
    Thanked 140 Times in 69 Posts
    Quote Originally Posted by m^2 View Post
    Nice, though really I would be happier to work with a site that has nothing to do with git.
    GitHub does let you clone projects over subversion… I'm not sure about support for pushing, but you can always file an issue with a patch just like in the days before distributed version control.

    A couple years ago I may have been more receptive to the idea of using something like bazaar, mercurial, or darcs. At this point, though, I think it's pretty clear that git has "won". I occasionally see bazaar and mercurial repositories, but they've mostly disappeared, and both can communicate bidirectionally with git repositories anyway, so people who prefer them are free to use them. I can't even remember the last time I saw a darcs or arch repo.

    Most programmers these days are comfortable with git, so using it means a lower barrier to entry for most people. I'm not sure why you don't like git, but I've never had enough of a problem with it to justify throwing out that advantage.

    Quote Originally Posted by m^2 View Post
    Nevertheless, what I'm missing are regular datasets that provide high coverage in afl.
    Yes, I would definitely like to find some more inputs. If anyone has any data that would be appropriate I would greatly appreciate them sharing.

    Quote Originally Posted by m^2 View Post
    But I have to note that on my current rig the setup writes very roughly 3.5 TB of temporary files per month. So it requires storage that is *not* a flash SSD. Though the OS should be able to discard nearly all of this traffic, I wouldn't take the risk, especially on someone else's rig.
    Like I mentioned earlier, I would strongly recommend using a tmpfs (or whatever non-Linux platforms have; for Windows there is ImDisk, and I assume other systems have something similar). It also helps to read from /dev/stdin and write to /dev/null if you can…
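
    i.e. something like this, if the target can read from stdin (the target name and its output argument are placeholders):
    Code:
    # no @@ after '--', so afl-fuzz feeds each test case to the target's stdin;
    # the placeholder target is assumed to take an output file, which goes to /dev/null
    afl-fuzz -i seeds -o /mnt/afl-ram/out -- ./decompress_target /dev/null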

    Quote Originally Posted by m^2 View Post
    Once I compiled LZHAM with the settings mentioned in the blog post it wouldn't run at all.
    It works for me using just AFL_HARDEN=1 and -fsanitize=address. I know LZHAM has some issues with tsan; maybe that is triggering something in ubsan as well. Anyway, I'd wait for Rich to fix the issues I already informed him about before trying again… it's unreasonably slow right now.
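
    For reference, the kind of builds I mean, assuming a plain Makefile-based project (adjust to whatever build system the codec uses):
    Code:
    # hardened, instrumented build (AFL_HARDEN adds FORTIFY_SOURCE and stack protector)
    make clean && AFL_HARDEN=1 make CC=afl-gcc

    # separate ASan build; AFL_USE_ASAN=1 makes the afl compiler wrappers add -fsanitize=address
    make clean && AFL_USE_ASAN=1 make CC=afl-clang

    # when fuzzing the ASan build, lift afl-fuzz's memory limit, e.g. afl-fuzz -m none ...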

  19. #16
    Member
    Join Date
    Nov 2015
    Location
    Ślůnsk, PL
    Posts
    81
    Thanks
    9
    Thanked 13 Times in 11 Posts
    Quote Originally Posted by nemequ View Post
    GitHub does let you clone projects over subversion… I'm not sure about support for pushing, but you can always file an issue with a patch just like in the days before distributed version control.
    It lets me download zip files too, which is what I normally use.
    However, by putting my work there I promote git, and that's not something I like.
    It's a pet peeve of a kind, but git is the only strong-copyleft-encumbered software that I keep using, and the only copyleft-encumbered one that I use not because I find it good but because I simply have to.
    Actually I tried jgit, but it's too basic for serious work.

    Quote Originally Posted by nemequ View Post
    A couple years ago I may have been more receptive to the idea of using something like bazaar, mercurial, or darcs. At this point, though, I think it's pretty clear that git has "won". I occasionally see bazaar and mercurial repositories, but they've mostly disappeared, and both can communicate bidirectionally with git repositories anyway, so people who prefer them are free to use them. I can't even remember the last time I saw a darcs or arch repo.
    I agree, git has won. This saddens me a lot.

    Quote Originally Posted by nemequ View Post
    Most programmers these days are comfortable with git, so using it means a lower barrier to entry for most people. I'm not sure why you don't like git, but I've never had enough of a problem with it to justify throwing out that advantage.
    I'm not sure if that's true; many people I talk with don't like git, but all of them are proficient in it.
    Nevertheless, for me the main reason not to use it is that it pushes copyleft at people. It's being pushed at me and I don't like it; I don't want to do the same to others.

    Quote Originally Posted by nemequ View Post
    Yes, I would definitely like to find some more inputs. If anyone has any data that would be appropriate I would greatly appreciate them sharing.
    IMHO if it produces an afl path not exercised by any other input present, it's an appropriate input.

    Quote Originally Posted by nemequ View Post
    Like I mentioned earlier, I would strongly recommend using a tmpfs (or whatever non-Linux platforms have; for Windows there is ImDisk, and I assume other systems have something similar). It also helps to read from /dev/stdin and write to /dev/null if you can…
    I use a hard disk and it works great. I don't think the writes touch the storage medium as sustained creation of over 2000 files per second is not something it would be able to do. The OS does the job fine, but YMMV.

    Quote Originally Posted by nemequ View Post
    It works for me using just AFL_HARDEN=1 and -fsanitize=address. I know LZHAM has some issues with tsan, maybe that is triggering something in ubsan as well. Anyways, I'd wait for Rich to fix the issues I already informed him about before trying again… it's unreasonably slow right now.
    I'm seeing ASAN errors. Maybe my clang version (3.7) is newer than yours. I've seen it find bugs that others can't see.
    Last edited by m^3; 30th November 2015 at 09:59.

  20. #17
    Member
    Join Date
    Nov 2015
    Location
    Ślůnsk, PL
    Posts
    81
    Thanks
    9
    Thanked 13 Times in 11 Posts
    The compfuzz repo is missing one thing: versioning.
    You have added a lot of zstd crash files, but for which zstd version are they?
    I don't think that a git repo is the best place to share such files. I think a bug tracker where you can add context would have been better.

    BTW, I have a tip: zstd has introduced an option to disable the legacy decoders, ZSTD_LEGACY_SUPPORT=0 in lib/Makefile.
    These are meant to be removed in 1.0, so IMHO they are not fuzz-worthy.
    Last edited by m^3; 30th November 2015 at 10:20.

  21. #18
    Member
    Join Date
    Jul 2013
    Location
    United States
    Posts
    194
    Thanks
    44
    Thanked 140 Times in 69 Posts
    Quote Originally Posted by m^3 View Post
    IMHO if it produces an afl path not exercised by any other input present, it's an appropriate input.
    And is small enough. Input files don't even need to be all that different; whether or not a new path is exercised depends on the codec, and the ones in input/ are meant to be generic. They can then be supplemented with the old crashes and any codec-specific inputs; then afl-cmin should be used to trim the input corpus down for a particular plugin.
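
    For reference, the trimming step looks something like this (corpus directories and the target are placeholders):
    Code:
    # keep only the inputs that add coverage for this particular target
    afl-cmin -i full-corpus -o minimized-corpus -- ./target @@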

    Quote Originally Posted by m^3 View Post
    I use a hard disk and it works great. I don't think the writes touch the storage medium as sustained creation of over 2000 files per second is not something it would be able to do. The OS does the job fine, but YMMV.
    I haven't had any problems with hard disks yet, but when I use one for AFL I see the activity light for the disk going crazy. The OS may take care of a lot of writes, but not all of them. I don't think it's a good idea to tempt fate when it's so easy to use a tmpfs…

    Quote Originally Posted by m^3 View Post
    I'm seeing ASAN errors. Maybe my clang version (3.7) is newer than yours. I've seen it find bugs that others can't see.
    I have 3.7, too, but I usually use afl-gcc not afl-clang. Just wish there were a Fedora package for afl-clang-fast.

    Quote Originally Posted by m^3 View Post
    The compfuzz repo is missing one thing: versioning.
    You have added a lot of zstd crash files, but for which zstd version are they?
    Does it matter? I figure once the issues have been fixed the only reasons to keep the files around are to make sure there are no regressions, and to feed them into future runs as interesting non-crashing inputs.

    Quote Originally Posted by m^3 View Post
    I don't think that a git repo is the best place to share such files. I think a bug tracker where you can add context would have been better.
    It would be easy enough to add a text file with context information, but again, I'm not sure what the use case is. Context information should go to the upstream project's issue tracker (or e-mail if they use GitHub's terrible issue tracker, which doesn't support private issues for security bugs). The results page on the wiki should link to the issue in the relevant issue tracker and (once the issue is fixed) the fix.

    Or, if you want, CompFuzz does have an issue tracker (GitHub's terrible issue tracker). You can also add comments on GitHub to files or commits.

    A repository is a good way to have all the information in one place so it is easy to re-test the files with newer software. Also, it's a nice home for patches to upstream libraries to disable checksums and return `EXIT_SUCCESS` even if decompression fails.

    Quote Originally Posted by m^3 View Post
    BTW, I have a tip: zstd has introduced an option to disable the legacy decoders, ZSTD_LEGACY_SUPPORT=0 in lib/Makefile.
    These are meant to be removed in 1.0, so IMHO they are not fuzz-worthy.
    Good point. It would make sense to have different directories for previous versions of zstd until 1.0 is out. All the current data should be for <= 0.3, so I'll move it to zstd-0.3 and, in the future, use zstd-0.x until 1.0 (which can just be zstd).

    It might be a good idea to add a patch for zstd which disables legacy support in the Makefile, too.

  22. #19
    Member
    Join Date
    Sep 2008
    Location
    France
    Posts
    856
    Thanks
    447
    Thanked 254 Times in 103 Posts
    Quote Originally Posted by nemequ View Post
    It might be a good idea to add a patch for zstd which disables legacy support in the Makefile, too.
    Starting with v0.4.1, there will be an option to build from the Makefile without legacy support:

    Code:
    make ZSTD_LEGACY_SUPPORT=0

  23. #20
    Member
    Join Date
    Jul 2013
    Location
    United States
    Posts
    194
    Thanks
    44
    Thanked 140 Times in 69 Posts
    Have the issues you found in bsc, density, and zpaq been fixed? If so, would you be willing to share the problematic files so I can add them to my repository so they can be used for future fuzzing?

    Have you reported them? If so, how and when?

    Do you still have, and would you be willing to share, any of the patches you had to apply?

    In case you can't tell, I'm trying to add the results to the CompFuzz Results page… I know it will never be exhaustive, but I'd like to come as close as I can.

  24. #21
    Expert
    Matt Mahoney's Avatar
    Join Date
    May 2008
    Location
    Melbourne, Florida, USA
    Posts
    3,255
    Thanks
    306
    Thanked 778 Times in 485 Posts
    I have a bunch of fuzzed archives that make zpaq crash. I have yet to fix it.

  25. #22
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Quote Originally Posted by nemequ View Post
    Have the issues you found in bsc, density, and zpaq been fixed?
    All 3 reported months ago, none fixed.
    BSC: No contact with the author, nobody else volunteered to fix them.
    Density: Reported here, no reply.
    ZPAQ: Matt replied promptly that he was willing to fix, but needs time. As stated, no fixes so far.
    Quote Originally Posted by nemequ View Post
    Do you still have, and would you be willing to share, any of the patches you had to apply?
    If memory serves me well, you have committed equivalents already. But now I'm not sure if that's for all of them.

  26. The Following User Says Thank You to m^2 For This Useful Post:

    nemequ (10th January 2016)

  27. #23
    Member
    Join Date
    Jul 2013
    Location
    United States
    Posts
    194
    Thanks
    44
    Thanked 140 Times in 69 Posts
    Quote Originally Posted by m^2 View Post
    All 3 reported months ago, none fixed.


    Quote Originally Posted by m^2 View Post
    BSC: No contact with the author, nobody else volunteered to fix them.
    Eh, I hate it when people disable the issue tracker and only accept PRs. Did you try e-mailing Ilya?

    If not (IIRC you like to keep your e-mail private), and assuming you also don't want to post the problematic archives publicly, you could upload them somewhere and PM me the URL (or encrypt them with my PGP key and post them publicly) and I can try to contact him; he does respond to e-mail.

    Quote Originally Posted by m^2 View Post
    Density: Reported here, no reply.
    Thanks. I just filed an issue against the density tracker, too.

    Quote Originally Posted by m^2 View Post
    ZPAQ: Matt replied promptly that he was willing to fix, but needs time. As stated, no fixes so far.
    Okay.

    Quote Originally Posted by m^2 View Post
    If memory serves me well, you have committed equivalents already. But now I'm not sure if that's for all of them.
    I only have patches for bzip2, pithy, zlib, and zling. If you still have others I'd love copies. If not then don't worry about it, I'll just put them together next time I fuzz the relevant libraries.

  28. #24
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,612
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Quote Originally Posted by nemequ View Post
    Eh, I hate it when people disable the issue tracker and only accept PRs. Did you try e-mailing Ilya?
    That's exactly what I did.

    Quote Originally Posted by nemequ View Post
    I only have patches for bzip2, pithy, zlib, and zling. If you still have others I'd love copies. If not then don't worry about it, I'll just put them together next time I fuzz the relevant libraries.
    I'll try to post something next week.

