I noticed something different
Between ‘modern probability theory’
And ‘calculus of plausible inference’.
Suppose you want to make a robot.
Suppose you want it free of prejudice.
Then you must make that robot
So it never adds a new assumption.
It happens often that some quantity
Shows up in your expressions
But what you know about that quantity
Amounts to less than a hill of beans.
A robot, faced with this situation,
And programmed according to the ‘moderns’,
Must come to you for more instructions,
To be given a density for the quantity.
The human must make a declaration,
‘Assume that X is uniformly distributed’,
Which the robot on its own cannot do,
Because it never makes assumptions.
In calculus of plausible inference, though,
The robots are commanded as follows:
‘You must never neglect what you know,
And you must never disagree with your fellow.’
Thus the robot must choose a density
That every other robot would have chosen,
Which always gives the same conclusion,
No matter how you integrate, etc.
Of course it ends up the same function,
Just forced by logic instead of assumed,
A little weightier in human judgment,
And keeping robots off our lazy backs.
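
(A prose footnote, not part of the verse: the "density that every other robot would have chosen" is, in Jaynes's treatment, the one singled out by consistency requirements such as maximum entropy. As a small illustrative sketch — my example, not the poem's — with nothing known beyond "there are n outcomes", no rival density has higher Shannon entropy than the uniform one, so that is the assignment forced by logic rather than assumed.)

```python
import math
import random

def entropy(p):
    # Shannon entropy in nats; the 0 * log(0) terms are taken to be 0
    return -sum(q * math.log(q) for q in p if q > 0)

n = 5
uniform = [1.0 / n] * n  # the density every robot would choose

# Maximum entropy is one formal version of "never add a new assumption":
# among all densities on n outcomes, none beats the uniform one.
random.seed(42)
for _ in range(1000):
    weights = [random.random() + 1e-12 for _ in range(n)]
    total = sum(weights)
    rival = [w / total for w in weights]
    assert entropy(rival) <= entropy(uniform) + 1e-12

# The maximum itself is log(n), here log(5)
print(round(entropy(uniform), 6), round(math.log(n), 6))
```

Any density that departs from uniformity encodes information the robot was never given — which is exactly the prejudice the poem forbids.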