Making it efficient

This model is quite simple, but I believe it can be applied to the real world with a few additions: initial capital, the required quality-control functions, and a resource flow shared evenly per cred (subtracted from sales profit).

When all profitability statistics are available to every worker, it is quite easy to reach a clear and reasonable distribution of profit among them. If we assume that sales (marketing) work and efficiency tuning (a kind of management) can be measured in creds and then in real values, then the control function can be handled just like ordinary work and added to the overall work pool.

So we have a highly human-involved work pool, independent of the exact type of work, that can be tuned via a positive feedback loop from sales and efficiency tuning. Moreover, it is completely transparent to all participants: the contribution of each employee's activity can be tracked with clear statistics. Every employee is also positively motivated to improve their qualification, which yields additional value both for them and for everyone else in the structure.
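The cred-proportional profit distribution described above fits in a few lines; here is a minimal JavaScript sketch (the function and worker names are hypothetical), where sales and control (management) work earn creds like any other work:

```javascript
// Hypothetical sketch: period profit shared in proportion to creds earned,
// with sales and control (management) work earning creds like any other work.
function distributeProfit(profit, credsByWorker) {
    var total = 0, name;
    for (name in credsByWorker) total += credsByWorker[name];
    var payout = {};
    for (name in credsByWorker)
        payout[name] = profit * credsByWorker[name] / total;
    return payout;
}
```

For example, distributing a period profit of 100 among a developer (50 creds), a salesperson (30 creds), and a controller (20 creds) pays out 50, 30, and 20 respectively.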

For more details, see our Economics StackExchange discussion.

Growth factors and optimization

So in general we start with small portions of low-skilled value gain to build some initial assets and begin some very small sales, with the intent to grow. To increase the quality of our product, and thus sales profits, we need to attract some skilled professionals.

They don’t need training, but they do need a share of sales as a reward. Suppose the profitability of the overall portfolio of software sales (S) depends on qualification (Q) and amount of work (W); let’s say it is proportional to both: S = k1*Q*W. So to attract and motivate experts we can simply give them shares in some kind of internal fictional currency (let’s call them creds). When an expert completes a small required portion of work, he gains some creds.

After some period (two weeks, a month), the overall sales for that period (in real values) are exchanged and shared according to each participant's cred balance. The only problem is that no one trusts anyone, but we can easily handle this with 100% shared data on sales and product efficiency (say, available only after a certain cred threshold is reached, to avoid fraud).
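A worked illustration of the S = k1*Q*W assumption (all numbers hypothetical): with k1 = 2, an expert with qualification Q = 3 doing W = 5 units of work contributes 30 to period sales, while a junior with Q = 1 and W = 5 contributes 10; real-value sales are then shared by cred balance as described above.

```javascript
// Hypothetical numbers illustrating S = k1 * Q * W, summed over workers.
var k1 = 2;
function portfolioSales(workers) {
    var s = 0;
    for (var i = 0; i < workers.length; i++)
        s += k1 * workers[i].q * workers[i].w; // each worker's contribution
    return s;
}
var workers = [
    { name: "expert", q: 3, w: 5, creds: 30 },
    { name: "junior", q: 1, w: 5, creds: 10 }
];
var sales = portfolioSales(workers); // 2*3*5 + 2*1*5 = 40
```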

Economic Model structure

Let’s get back to our simple model. To avoid any specific technical complexities, suppose we are planning to create an organization that produces high-quality software in the ever-changing market of computers, communications, and applied devices. Suppose we have no money or motivating resource, but there is plenty of work force distributed around. We are also not planning to take any credit, and we start with a zero balance to achieve some value gain. The plan is to motivate workers and create a self-growing enterprise that creates, promotes, and sells software.

To achieve some initial growth we need to hire some kind of low-skilled work force for training. They can irregularly complete small portions of work that help us create some basic starting products. To keep things clear, let’s assume we perform only 3 basic activities – create, sell, and maintain our software product. So our workforce consists primarily of salespeople and developers, and we can get an initial value gain from training these employees.

Economic environment

Let’s imagine an ideal market with minimal state regulation and high-risk, quite uncertain venture investments. For a simple model, let’s say it is a specific, currently almost indefinitely evolving economic branch with highly human-engaged activities requiring as much qualification as possible. It is moderated reasonably at the society level (like a common good or commonweal for everyone), and all contracts are fulfilled with a 100% guarantee.

In the real world these simplifications may apply to some kinds of high-level scientific research, advanced medicine, experimental technologies, hi-tech capabilities, and the like.

As currently observed in most states, organizations do not tend to evolve and keep growing indefinitely: they grow to some stable size and then enter some kind of decay or deprecation.

The question is: is it possible, and how, to build a self-growing and self-evolving organization under these conditions? Are there fundamental restrictions on this kind of economic structure, and what are the major restricting factors?

There are some ideas on this topic in our Economics StackExchange discussion.

ReUse.Net Solution structure

Here are some details on the ReUse.Net Solution structure.

  • Base folder – contains base code, the same for all C-like languages (Java, C#, C++)
  • Common folder – contains common .NET code, the same for all .NET languages (C#, F#, Visual Basic)
  • Utilities folder – contains common .NET utility code

There are some base common code structures available in all ReUse frameworks:

  • _ – static class with common app/code data and utilities
  • Cx – common app/code execution context, containing logging data
  • Mx – common code method execution context, defining the code launch type (try/catch, measure performance)
  • f<…> – common function delegates with multiple arguments
  • v<…> – common void function delegates with multiple arguments
  • c<…> – common class union structures with multiple types
  • s<…> – common function delegates with multiple arguments
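The Mx launch-type idea (run plainly, inside try/catch, or with performance measurement) belongs to the C# framework, but the pattern itself is language-neutral. A minimal JavaScript sketch with hypothetical names:

```javascript
// Hypothetical sketch of a method execution context: the ctx flags
// select the launch type: plain call, try/catch, or timed run.
function runWith(ctx, fn) {
    if (ctx.measure) {
        var start = Date.now();
        var result = runWith({ safe: ctx.safe }, fn);
        console.log("elapsed ms: " + (Date.now() - start));
        return result;
    }
    if (ctx.safe) {
        try {
            return fn();
        } catch (e) {
            console.log("caught: " + e.message); // would go to Cx-style logging
            return undefined;
        }
    }
    return fn();
}
```

For example, `runWith({ safe: true }, work)` swallows and logs an exception instead of propagating it, while `runWith({ measure: true }, work)` logs the elapsed time.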

ReUse.Net Specification

ReUse_Net is a common .NET code framework project, written in C#. It can be compiled with Visual Studio 2010-2019 and targets .NET version 4.0 and above.

Since it is designed to be truly platform/architecture independent and highly portable, it should be easy to port to the .NET Standard and .NET Core versions.

The idea behind this framework is to create simple reusable code blocks, both common and specific to the current platform/language.

With .NET we try to make heavy use of delegates, lambda methods, and LINQ queries. All the methods are implemented as extension methods (using the `this` parameter notation), together with very useful quick type code functions.
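JavaScript has no direct equivalent of C# extension methods, but the same fluent style of short helpers attached to an existing type can be approximated by augmenting a prototype (a technique generally discouraged for built-ins in shared code; the helper name here is hypothetical):

```javascript
// Hypothetical analog of an extension method: a short reusable helper
// attached to String so calls read fluently, like C# "this" extensions.
if (!String.prototype.toTitle) {
    String.prototype.toTitle = function () {
        // Upper-case the first character and keep the rest unchanged.
        return this.length === 0
            ? ""
            : this.charAt(0).toUpperCase() + this.slice(1);
    };
}
```

A call such as `"reuse".toTitle()` then reads like an extension-method invocation on the string itself.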

Take a look at the official documentation.

Welcome to ReUse.Net

An example of the common ReUse concept applied to the widely used .NET language family.

ReUse.Net is a well-designed common collaborative code framework with simple main principles: a heavy focus on typical code reuse; ultra-minimalistic, lightweight, and highly efficient; extendable.

Since most recent open source code for modern programming languages is not well designed, we decided to make a well-designed common collaborative code framework with simple main principles:

  • a heavy focus on typical code reuse
  • designed to be a common code library
  • ultra-minimalistic standard names
  • well-defined comments
  • allows extensions based on the core framework
  • lightweight and highly efficient
  • support for the standard base types of common languages, with extensions to custom language types

For more details, see the code reuse concept.

New ReUse common code project started

Since most recent open source code for modern programming languages is not well designed, we decided to make a common code framework with simple main principles:

  • a heavy focus on typical code reuse
  • designed to be a common code library
  • ultra-minimalistic standard names
  • well-defined comments
  • allows extensions based on the core framework
  • lightweight and highly efficient
  • support for the standard base types of common languages, with extensions to custom language types

Take a look and discuss it in our new GitHub ReUse project.

The correlation between scales of time and space

I guess it is quite well known that there is an empirical correlation between scales of time and space.

That means that small things usually move or rotate relatively (compared to their size) faster than large ones.

For example, ants move relatively fast compared to large animals.
Bacteria are much, much faster still.
Atoms and electrons live on quite short time scales, around 10e-7 seconds.

On the other end of the scale, planets move relatively slowly (a day or a year), and stars and galaxies are almost static on our time scale.

In general it looks like the scale of time is almost proportional to the scale of space.

How can this be explained using standard physics ideas?

Is it a kind of inertial property, or something like that?

For more discussion on this topic, take a look at Physics StackExchange.

Common Javascript alerts methods

So here are some common JavaScript message alert methods (for common texts, variable values, and lengths), widely used for testing and debugging.

There are also some useful methods that display alerts on a condition and that return values.

// Common JavaScript alert methods
var C = {
    // Alert a plain message.
    A: function (AlertMessage) {
        alert(AlertMessage);
    },
    // Alert a message together with a value.
    Av: function (AlertMessage, AlertValue) {
        alert(AlertMessage + " = " + AlertValue);
    },
    // Alert a message, then return the given value.
    Ar: function (ReturnValue, AlertMessage) {
        alert(AlertMessage);
        return ReturnValue;
    },
    // Alert a message together with a value's length.
    Al: function (AlertMessage, AlertValue) {
        alert(AlertMessage + " = " + AlertValue.length);
    },
    // Alert only when the condition is true; the value is optional.
    Ai: function (AlertOnTrueCondition, AlertMessage, AlertValue) {
        if (!AlertOnTrueCondition)
            return;

        if (AlertValue !== undefined)
            alert(AlertMessage + " = " + AlertValue);
        else
            alert(AlertMessage);
    }
};

Usage is very simple:

C.A("My test message");  // alert a message
C.Al("My val length", MyVal);  // display MyVal's length
C.Av("My val value", MyVal);  // display MyVal's value
C.Ai(MyVal > 10, "My val is big", MyVal);  // alert only when the condition is true
return C.Ar(MyVal, "My test message");  // display a message and return MyVal