Notes

(0000265)
jonashaag
2016-02-01 19:44

This also applies to the x64 and sparc backends, possibly others.

(0000266)
mohr
2016-02-01 20:13

Just as a note to document this: we noticed behavior related to this back in October 2015, in the context of conservative garbage collection. The GC scans the stack for possible pointers and by default assumes that pointers are pushed to the stack only at offsets aligned to the pointer size (4 bytes/8 bytes), so it scans the stack in steps of 4 (or 8) bytes. Since this alignment guarantee does not hold for code generated by FIRM, the GC sometimes misses pointers on the stack and frees objects that are still reachable, resulting in bugs that are extremely hard to track down. The only viable workaround at that point was to set the assumed stack alignment to 1, which slows the GC down considerably (see http://pp.info.uni-karlsruhe.de/git/bdwgc/commit/?h=amd64-octopos&id=1da0df1e23c043ed011134648656c50b04f3f114 ). Hence, producing only aligned stores would solve this problem as well.

(0000268)
Matze
2016-06-27 09:24

This was fixed a while ago by b6787e36eb0d99eb28f4fb478932e0c9ed094e90 (plus later fixup commits).
The ++a; from the testcase is rejected for me (maybe someone else fixed that in the meantime). Changing the testcase to ++a[0]; gives me:

    subl $4, %esp    /* be_IncSP Iu[143:25] */
    incw 2(%esp)     /* ia32_IncMem T[125:10] t2.c:3:2 */
    popl %edx        /* ia32_Pop T[144:26] */