It is very relevant that LLVM look into handling HDL and other binary
and analogue operation modeling capabilities, as well as expand
abstractly in the other direction to include the complex structure
optimization that is critical in realtime, dynamic, and VM operations.
Without confirming the true characteristics of the lower structure
types and their operating characteristics (especially the physical
implementation), it is not possible to coherently optimize complex
code sequences, especially given the wide variety of influences on
abstract behavior.
Obviously, all characteristics of the physical system implementation
should be included in the standard techniques.
More importantly, given that most of that model data (result values)
consists of fairly finite, known variable ranges for most target
implementations, one must propagate these analytics into the higher
models.
There is no difference between the physics simulation techniques used
to verify the physical chips and the same modeling applied through
operation sequences all the way up to complex physical modeling.
There is, though, an immediate need to be able to model and optimize
entire computational systems regardless of scope.
GPU, FPGA, fixed ASIC, CPU, and memory... the various signaling
techniques between local components... interdependencies beyond local
arrangements.
The need to handle "signal propagation" models, including the physics
of telecom and the hardware characteristics of intermediate devices,
mandates a coherent and comprehensive modeling technique across the
entire scope of influencing structures.
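A composite signal-propagation model can be sketched as a sum of
per-hop physics (propagation time in the medium) and per-device
hardware delays. The constants and the function below are illustrative
assumptions, not measured specs.

```python
# Hedged sketch of a composite latency model for a signal path:
# propagation delay (physics of the medium) plus per-device delay
# (hardware characteristics of intermediate devices).

SPEED_IN_FIBER_M_PER_S = 2.0e8  # roughly 2/3 of c in silica fiber

def path_latency(hops):
    """hops: list of (fiber_length_m, device_delay_s) pairs,
    one pair per segment ending in a repeater or switch."""
    total = 0.0
    for length_m, device_delay_s in hops:
        total += length_m / SPEED_IN_FIBER_M_PER_S  # propagation
        total += device_delay_s                     # intermediate device
    return total

# 100 km of fiber in two segments, each ending in a repeater that
# adds 5 microseconds:
hops = [(50_000, 5e-6), (50_000, 5e-6)]
print(f"{path_latency(hops) * 1e3:.3f} ms")  # → 0.510 ms
```

The same accumulation structure extends to any chain of influencing
structures; only the per-hop physics changes.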
Specifically: you can NOT design optimal multi-processor or
heterogeneous-processor implementations without knowing everything (a
composite model) about the whole system.
Typically one designs systems based on the entire system.
Code optimization must accommodate this awareness as well as introduce
advanced modeling to simpler, conventional code.
I will note as well that complex physical systems also affect
computational optimization. Heat at a data center or on a fiber link
causes some deviation from ideal specs. It is hot on that wire. Your
computation will stall due to telecom latency that CAN ALWAYS BE
MODELED and accommodated. Sure, if the last packet from the remote
processor was late, you can guess about the next; but if you are
running at midday in summer and this happens daily, changing the
scheduling order is the obvious response.
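The scheduling idea above can be sketched as follows: when a predicted
diurnal pattern says the remote link is slow, run independent local
work first instead of stalling. The delay model, thresholds, and task
names here are all hypothetical.

```python
# Sketch of latency-predictive reordering. The diurnal delay model is
# an assumption standing in for measured daily latency data.

def predicted_remote_delay(hour):
    # Illustrative model: baseline 10 ms, +5 ms around midday heat.
    return 0.010 + (0.005 if 11 <= hour <= 15 else 0.0)

def schedule(tasks, hour):
    """tasks: list of (name, needs_remote_data) pairs.
    When the remote link is predicted slow, move local-only work
    ahead of tasks that would stall on remote data."""
    if predicted_remote_delay(hour) > 0.012:
        # False sorts before True, and Python's sort is stable,
        # so local-only tasks move to the front in original order.
        return sorted(tasks, key=lambda t: t[1])
    return tasks

tasks = [("consume_remote", True), ("local_fft", False), ("local_io", False)]
print([name for name, _ in schedule(tasks, hour=13)])
# → ['local_fft', 'local_io', 'consume_remote']
```

At 3 a.m. the same call would leave the order untouched, since the
predicted delay stays at baseline.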
I will also note that the set of variable characteristics used in
process modeling is numerically identical to that used in higher-level
physics simulation. Everything from atomic physics (FPGA) through
automobile heat and mechanical force transfer, or turbulence and
environmental conditions at a repeater (wire), uses the exact same
processing technique and data set (a superset differing only in the
processing scale).
Both in order to accurately simulate and optimize modern computational
models, and due to the fundamental similarities in computational
process, LLVM should expand to handle ALL lower dependencies (HDL,
etc.) (which are an obvious need anyway) as well as apply the
techniques of process modeling to all other similar mechanisms.
You will note that third-party scientific processes could easily be
optimized intrinsically when the scope of their numerical model is
known, which depends on such things as materials physics specs and FEA
modeling of ... the exact same systems that are then selected to run
the model on.
There are few modern abstract modeling languages and systems, and most
are restricted use and highly isolated.
Code depends on hardware, operations depend on physical reality.
Obviously there is a need to build the most comprehensive system model
now (with all data options handled), allow optimal techniques to be
introduced later, and define a finite set of modeling techniques that
handle most of the required situations.
I look forward to your response and discussions of this topic.
-Wilfred L. Guerin
aim/msn/yp/gt/sk/etc "WilfredGuerin" icq 105758521