today's todo:

// INITIAL SUPPORT FINISHED - add support for three valued logic

- create typing system which dynamically types the memory and revisits
  portions to double-check the validity of claims.

let's talk about the type checker. it is going to be based loosely on
what the current implementation (brian's) does. the only difference is
that the initial data structure pointers ('d' in this case) need to be
checked... from then on, only pointers need to be checked. (though this
raises the question of whether we will continue to attempt to

................

short term todo -------------------------------------------------------

- add checks for each pointer dereference (before the dereference!) to
  make sure the pointer points to valid memory... this does not provide
  type safety, but it does provide memory safety

- add support for three valued logic in the system... i suggest maybe a
  goto mechanism or an exception mechanism... or maybe something a lot
  more clever... anyway, gotta do three valued logic

- start thinking of ways to limit space consumption, as well as measure
  it (add methods to brian's code to dump the sizes of sets and so
  on... we want actual sizes... bytes... perhaps

- at some point it might be a good idea to handle cyclic graphs!!!

--------------------

-- arg... 3 valued logic --------
alright... so it turns out that three valued logic is needed because
doubly linked lists are not 1->1... they are in fact 1->1/0 (the back
pointer can be null at the ends of the list), which is not 1->1... and
we cannot expect the programmer to include guards all the time... so
maybes it is!

about typing ***********************************************************
ok... so it turns out that typing is not that easy. first off, there is
a distinction between typing the root types and then typing
pointers. when you start off, you can go ahead and type the regions
that are global variables. these typed regions use a different system
for testing types. basically, when you are typing something and
determining the sizes of structures, it's important not to recursively
type data structures as you go along, because you may end up in an
infinite loop... instead you must just look at the fields and verify
that they are in the correct regions. you must do this because
otherwise it is impossible to type things... the question might be:
given a heap and a set of accesses to that heap, each with a specific
claim as to its type, is it possible to determine whether each access
is well typed? it may be better to only type what you actually need...
for instance, always do field type checks... of course the question
then becomes: when do you go ahead and type an entire structure?

now typing is of course a moot point in a memory-safe, type-safe
language like Java.
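a rough sketch (in C) of what the per-field "valid memory" check could
look like. everything here -- the region table, register_region,
valid_memory -- is a hypothetical stand-in for whatever bookkeeping the
generated checker actually maintains; it just illustrates the
memory-safety-but-not-type-safety tradeoff noted in the short term todo
above:

    #include <stddef.h>

    /* hypothetical region table: one entry per typed root region */
    struct region { char *base; size_t size; };

    #define MAXREGIONS 128
    static struct region regions[MAXREGIONS];
    static int nregions;

    /* called once per root data structure (e.g. 'd') */
    void register_region(void *base, size_t size) {
        if (nregions < MAXREGIONS) {
            regions[nregions].base = (char *) base;
            regions[nregions].size = size;
            nregions++;
        }
    }

    /* returns 1 iff [ptr, ptr+size) lies entirely inside one
     * registered region.  this is a memory-safety check, not a
     * type-safety check: it cannot distinguish two same-sized
     * structs sharing a region, which is exactly the limitation
     * discussed above. */
    int valid_memory(void *ptr, size_t size) {
        char *p = (char *) ptr;
        for (int i = 0; i < nregions; i++) {
            if (p >= regions[i].base &&
                size <= regions[i].size &&
                p + size <= regions[i].base + regions[i].size)
                return 1;
        }
        return 0;
    }

the generated code would then guard every dereference with something
like "if (!valid_memory(d->next, sizeof(struct node))) goto maybe;"
-- which also hints at how the goto mechanism for three valued logic
could plug in: a failed check jumps to a label that records the
predicate as "maybe" instead of crashing.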
there is another issue: explicit array bounds checks versus letting the
typemaps catch them. if you have two adjacent arrays of the same type
back to back, then an out-of-bounds access on the first may not fail in
the typemap (unless it runs past the second array as well, and even
then only if you are lucky). explicit array bounds checks may be in
order.

the issue may boil down to this: what can we guarantee in a system
where we check things at a structure level in some places but at a
field level in others? can we guarantee anything? if not, perhaps we
should just use the typemaps as valid-memory checks at the field level
so that we don't get unexpected crashes at runtime...

in short: this problem seems to be a lot more complicated than it
initially appeared, and the returns seem to diminish the more we think
about it. certainly the system now proposed (check root data structures
and any pointers accessed) seems to be innately flawed in some sense
(though i cannot put my finger on it :)

i think my time is better spent working on something else... ?
************************************************************************

#STOPPED#

about typemaps: allow a flag to gear the level of checks -- nothing,
bounds, or full types. implement the bounds check for now.

can we implement typemaps so that they are usually fast and only slow
when something is wrong?

we're interested in space. hit the theoretical low without reaching
quadratic time.

create optimal benchmark sets... recognize typical specs and generalize
heuristics to deal with them...

MARTIN's IDEA ************************************************* 7/9/03
use a memory/type-safe language and check only on reads (when data is
accessed). check consistency properties only when a property needs to
be consistent... only when the program needs the property to be true
**********************************************************************

issue: what if you are building the model and you are adding elements
to a relation and the sets that compose that relation don't exist yet?
hmm... maybe the question boils down to whether the items being added
to a relation are implied by the addition... that is, if we have:

R: S->S
condition(a) => <a,b> in R

then if condition(a) => b in S, we never need to check that relation
(R)'s type... we only need to check that condition(a) => b in S holds
up (statically).

if not, the question is whether it's better to do a post-exhaustive
step or a per-addition check... in the per-addition case we will
obviously need to ensure that the sets composing the domain and range
of a relation are built prior to the rules that add elements to the
relation (i'm thinking that a per-addition check is better because the
post-exhaustive step seems just as expensive and messier!)

a question still remains though: do we need to build the sets prior?
well... of course we do, unless we want the worklist to also trigger
rules based upon additions to the sets of the respective relations...
of course we would only be interested if a particular relation addition
was blocked because of such a check...

what do we do now? well, we don't even properly include required
constraints in the required symbol list... that is being changed right
now... we'll have to compare this with the older graph

ok... the answer is that the graph changes once we make this change...
i'm checking whether cycles appear... i'm also going to consider the
problem of not having a well-defined relation domain/range
(i.e. integers rather than a strictly typed set)
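a minimal sketch of the per-addition check argued for above, assuming a
toy array-backed set representation; set_contains,
relation_add_checked, and the struct layouts are all invented for
illustration, not taken from brian's code:

    #include <stddef.h>

    /* toy array-backed set and relation, purely for illustration */
    struct set { void **elems; int n; };
    struct tuple { void *left, *right; };
    struct relation {
        struct set *domain, *range;
        struct tuple *tuples;
        int n;
    };

    static int set_contains(struct set *s, void *x) {
        for (int i = 0; i < s->n; i++)
            if (s->elems[i] == x) return 1;
        return 0;
    }

    /* per-addition check for R: S->S.  if rule ordering guarantees
     * the domain/range sets are built before any rule that adds to
     * R, this never fails; otherwise it reports the blocked addition
     * so the worklist could retry it after the sets grow. */
    static int relation_add_checked(struct relation *R,
                                    void *a, void *b) {
        if (!set_contains(R->domain, a) ||
            !set_contains(R->range, b))
            return 0;   /* blocked: endpoint not (yet) in S */
        R->tuples[R->n].left = a;   /* assume capacity, for brevity */
        R->tuples[R->n].right = b;
        R->n++;
        return 1;
    }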
------------------------------------------------------------------------

1. check at compile time how relations are used. if they are used in
   both directions (meaning inverse and otherwise), generate both
   simultaneously so that checks are fast... otherwise only generate
   the one that is used... the generation code should automatically
   handle the necessary switches and double-adding... (see the sketch
   at the end of this file)

new issues - it is not always possible to precalculate (or even
calculate) an offset for a particular field... this would be the case
if a field relied on a particular structure's fields and that fields'
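a hypothetical sketch of the "double-adding" idea from point 1: when
the spec uses a relation in both directions, keep forward and inverse
tuple stores in one struct and update both on every add, so an inverse
query never has to search the forward store by its right component.
none of these names come from the actual code generator, and a real
implementation would presumably hash both directions; flat arrays keep
the sketch short:

    #include <stddef.h>

    struct tuple { void *left, *right; };

    /* a relation the compiler decided is queried both ways:
     * forward holds <a,b>, inverse holds <b,a> */
    struct birelation {
        struct tuple *forward;
        struct tuple *inverse;
        int n;
    };

    /* double-adding: one logical insertion updates both stores */
    static void birelation_add(struct birelation *R,
                               void *a, void *b) {
        R->forward[R->n].left  = a;   /* assume capacity, for brevity */
        R->forward[R->n].right = b;
        R->inverse[R->n].left  = b;
        R->inverse[R->n].right = a;
        R->n++;
    }

    /* with the inverse materialized, an inverse-image lookup is the
     * same kind of scan as a forward one -- no asymmetric cost at
     * check time */
    static void *birelation_inverse_image(struct birelation *R,
                                          void *b) {
        for (int i = 0; i < R->n; i++)
            if (R->inverse[i].left == b)
                return R->inverse[i].right;
        return NULL;
    }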