[Tim Buchheim]
Thu Jul 11 17:25:00 PDT 2002
Fixed Tcl "source" command so filenames with spaces work.
2. Get rid of ios{Width,Precision,Mask} in TracedVar. By default the iostream support isn't compiled in, and no one was using it. (Saves 4B per TracedVar.)
3. Convert TracedVars to use snprintf internally. This is an INCOMPATIBLE change: if you used to call tv->value(foo), you now need to call tv->value(foo, sizeof(foo)).
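For illustration, here is a hedged sketch of the old versus new call; dump_traced and the buffer wrk are names invented for this example, and it assumes a TracedVar* from tclcl's tracedvar.h:

    #include "tracedvar.h"

    void dump_traced(TracedVar *tv)
    {
        char wrk[64];                 /* caller-owned output buffer */
        /* old call: tv->value(wrk);  -- no length argument, easy to overflow */
        tv->value(wrk, sizeof(wrk));  /* new call: pass the buffer size */
    }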
Delayed binding is only available if you configured tclcl and otcl with the option --enable-tclcl-classinstvar. (When doing so, by the way, make sure every component is built the same way, or you'll get random crashes due to mismatched slots in virtual function tables.)
Delayed binding is a way to bind variables between OTcl and C++ that consumes memory only when the bound variables are actually used from Tcl. (Normally bound variables consume memory throughout the lifetime of the object.) Delayed binding is therefore useful if you have a class with many instances (for example, NsObject) whose bound variables are only occasionally used from Tcl (for example, off_cmn_).
Here's a minimal example of delayed binding:
    class C5 : public TclObject {
    public:
        C5();
        virtual int delay_bind_dispatch(const char *varName,
                                        const char *localName);
        virtual void delay_bind_init_all();
    protected:
        int normal_;     /* normally bound: always has Tcl-side state */
        int delayed_;    /* delay-bound: Tcl-side state only when used */
    };

    DEFINE_OTCL_CLASS(C5, "C5") {
    }

    C5::C5()
    {
        bind("normal_", &normal_);
    }

    /* declare which variables are delay-bound */
    void C5::delay_bind_init_all()
    {
        delay_bind_init_one("delayed_");
    }

    /* resolve a delay-bound variable when Tcl actually touches it */
    int C5::delay_bind_dispatch(const char *varName, const char *localName)
    {
        DELAY_BIND_DISPATCH(varName, localName, "delayed_", delay_bind, &delayed_);
        return TclObject::delay_bind_dispatch(varName, localName);
    }
From Tcl there should be no visible difference between delayed and normally bound variables.
The penalty for delayed binding is higher run-time cost. Classes with delayed binding incur a search cost proportional to the depth of the class hierarchy and the number of delay-bound variables in that hierarchy.
(What happens with delayed binding is that delay_bind_dispatch is called for each instvar/set/get; this call percolates up the class hierarchy. (This is basically how command dispatch currently works.) With normal binding each instvar is put in a Tcl hash table and accessed in O(1) time, but with higher memory requirements.)
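To make the percolation concrete, here is a hedged sketch of a hypothetical subclass of the C5 example above (C6 and its variable extra_ are invented for this example; class registration is omitted). Each lookup tries the class's own delay-bound variables and then falls through to its parent, which is where the per-lookup search cost comes from:

    class C6 : public C5 {
    public:
        virtual int delay_bind_dispatch(const char *varName,
                                        const char *localName);
        virtual void delay_bind_init_all();
    protected:
        int extra_;      /* delay-bound variable added by the subclass */
    };

    void C6::delay_bind_init_all()
    {
        delay_bind_init_one("extra_");
        C5::delay_bind_init_all();    /* parent declares its own variables too */
    }

    int C6::delay_bind_dispatch(const char *varName, const char *localName)
    {
        /* try this class's delay-bound variables first ... */
        DELAY_BIND_DISPATCH(varName, localName, "extra_", delay_bind, &extra_);
        /* ... then percolate up to C5 (and from there to TclObject) */
        return C5::delay_bind_dispatch(varName, localName);
    }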
For a simple ns run with ~1400 NsObjects (~120 nodes/links/agents) and both of NsObject's variables delay-bound, run-time went up ~2% (not clear if this is outside expected error) and memory went down 10%. For larger simulations, run-time cost should stay constant and memory savings should rise. (If you want to run your own benchmarks, my test case was ./ns tcl/ex/many_tcp.tcl -client-arrival-rate 20 -ns-random-seed 1 -mem-trace 1.)
    /*
     * Class Foo -superclass Bar
     * Foo is the base class for all foo-like objects blah blah.
     *
     * Foo public run {}
     * The run method causes a foo object to start running.
     */
lsort
to sort a list of SplitObjects (née TclObjects) in increasing order, as
lsort -command SplitObjectCompare {list}
    class TestClass : public TclObject {
    public:
        TestClass();    /* must have this constructor */

        virtual int init(int argc, const char * const *argv) {
            int my_arg;
            BEGIN_PARSE_ARGS(argc, argv);
            ARG(my_arg);
            END_PARSE_ARGS;
            /* ... use 'my_arg' in the initialization of this object */
        }

        int func1(int argc, const char * const *argv) {
            char *name;
            BEGIN_PARSE_ARGS(argc, argv);
            ARG(name);
            END_PARSE_ARGS;
            // ...
        }

        int func2(int argc, const char * const *argv) {
            BEGIN_PARSE_ARGS(argc, argv);
            END_PARSE_ARGS;
            // ....
        }
    };

    OTCL_MAPPINGS(TestClass, "TestClass") {
        INSTPROC(func1, "func1");
        INSTPROC(func2, "func2");
    }
proc tkerror
will also print its argument string when invoked.