# Truncating the millisecond digit



## obelisk477 (Dec 9, 2014)

My question is, now that we have stackmats with the ability to accurately time down to the millisecond, why are we still truncating the digit and using the rest as the official time? I could see how this is relatively unimportant for solves greater than ~10 seconds, but as people gradually get faster, it would seem to make sense at the very least to consider keeping the digit for solve times faster than ~10 seconds. I guess a 'scientific' argument would be that since we track 'slower' solves to 4 significant digits of accuracy (10.02), this ought to be the case with faster solves as well (9.998).

I suppose the short answer is that not everyone has the gen-whatever stackmats with the capability to do this, but even so, shouldn't we be considering this for the future when most timers will be able to do so?


----------



## Username (Dec 9, 2014)

I think one of the problems is integrating third-decimal times into the current rankings. Imagine someone fast getting a 5.554 3x3 single: is it WR or not?

But honestly I don't know, I see your point


----------



## JasonDL13 (Dec 9, 2014)

I agree with you. There are some ties in the WRs.

I guess the only way to fit it into the current rankings is to set the next digit to 9.
For the record: MultiBLD doesn't track milliseconds, but it's still ranked as 99.


----------



## obelisk477 (Dec 9, 2014)

JasonDL13 said:


> I agree with you. There are some ties in the WRs.
> 
> I guess the only way to fit it into the current rankings is to set the next digit to 9.
> For the record: MultiBLD doesn't track milliseconds, but it's still ranked as 99.



Well, mathematically it seems fairest to set the millisecond digit for the existing sub-10 times to 5. It wouldn't change the current rankings, and 5 is the closest digit to the expected value of the digit that was truncated.
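If the truncated millisecond digit is roughly uniform on 0–9, its expected value is 4.5, so 5 is the nearest single digit to backfill with. A quick simulation sketch (mine, not from the thread) illustrates this:

```python
import random

# Sketch: if true times are effectively random at the millisecond level,
# the digit lost by truncation is uniform on 0-9 and averages 4.5, so
# backfilling old sub-10 records with a 5 is the nearest single digit.
random.seed(1)
lost_digits = [random.randrange(10) for _ in range(100_000)]
mean_lost = sum(lost_digits) / len(lost_digits)
print(round(mean_lost, 2))  # close to 4.5
```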


----------



## Dene (Dec 9, 2014)

obelisk477 said:


> My question is, now that we have stackmats with the ability to accurately time down to the millisecond, why are we still truncating the digit and using the rest as the official time?



Here's the important question: _Can_ we accurately time down to the millisecond?

Think about it.


----------



## Stefan (Dec 9, 2014)

JasonDL13 said:


> For the record: MultiBLD doesn't track milliseconds, but it's still ranked as 99.



What do you mean?


----------



## goodatthis (Dec 9, 2014)

Personally I have always thought of significant figures as relatively useless. For IAAF WRs, 9.58 has to be, and is, every bit as accurate as 26:17.53 (if not more accurate from a practical standpoint).

Because with 9.999 (Pro timer) vs 10.00 (Gen 2), how are they equally accurate? They aren't: we don't know the thousandths digit for the 10.00. So from the standpoint of sig figs, they really don't determine accuracy.

And in reality, as long as we truncate, and don't round, we eliminate the need for extra accuracy. Gen 2 timers truncate, and if we just truncate the results from Pro timers as well, it's fine.

I think the best thing to do is to keep the millisecond data as metadata, not displayed; in the case of a tie in the database, solves done with a Gen 2 would get a 9 in the milliseconds place to assume the worst case, while solves done with a Pro timer would keep their true value.
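The tie-breaking scheme described above could be sketched like this (the names and representation are invented for illustration; a time is its official centiseconds plus an optional known millisecond digit):

```python
from typing import NamedTuple, Optional

class Solve(NamedTuple):
    centis: int                  # official truncated time, in centiseconds
    milli_digit: Optional[int]   # only known for Pro-timer solves

def ranking_key(s: Solve) -> int:
    # A Gen 2 solve's unknown digit is assumed to be the worst case (9),
    # so it can never unfairly win a tie against a measured Pro-timer digit.
    digit = 9 if s.milli_digit is None else s.milli_digit
    return s.centis * 10 + digit

gen2 = Solve(centis=555, milli_digit=None)  # 5.55 on a Gen 2
pro = Solve(centis=555, milli_digit=4)      # 5.554 on a Pro timer
assert ranking_key(pro) < ranking_key(gen2)  # the Pro-timer solve wins the tie
```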



Stefan said:


> What do you mean?



I think he means that if someone did a 30:00 multi attempt, it's considered 30:00.999 (which is pointless, not to mention unsubstantiated, since there are no mixed results in multi). A better example of this "worst case scenario" reasoning, one that does apply to mixed results, is that if someone doing data entry cannot determine whether the time on a scorecard is 2:37.22 or 2:31.22, they are supposed to pick the worse possible time.


----------



## Chree (Dec 9, 2014)

There's also the cost of upgrading to consider. Lots of groups that organize competitions are still using Gen 2s. Enforcing a transition to Gen 3 timers would be a costly move.

And if you don't force people to use Gen 3 timers, would you then say that groups still using Gen 2s have a competitive advantage, because their times automatically assume the millisecond is zero?


----------



## Lucas Garron (Dec 9, 2014)

People still use Gen 2 timers. We don't want to make people stop using them because 1) that costs money, and 2) there are definitely fewer issues with Gen 2 timers (e.g. timer resets), so they are kind of preferable.

Our past results use hundredths. I don't want to be in charge of debating a transition plan.

Milliseconds just let you break a few ties that arguably don't need to be broken. We'd still need to handle (rarer) ties, and if two people are tied to hundredths, I find it hard to argue that we need a better way of distinguishing one over the other.

Stackmat timers are known to be biased towards certain results. If we started recording millisecond results, that wouldn't mean we have millisecond accuracy.


----------



## obelisk477 (Dec 9, 2014)

Dene said:


> Here's the important question: _Can_ we accurately time down to the millisecond?
> 
> Think about it.



I am thinking about it and am coming up with nothing. I don't know. Can we? Are you saying that the electronics are incapable of doing so, or something else?


----------



## tseitsei (Dec 9, 2014)

obelisk477 said:


> I am thinking about it and am coming up with nothing. I don't know. Can we? Are you saying that the electronics are incapable of doing so, or something else?



Stackmat timers are incapable of doing so. They are biased towards some results even in the centisecond range.

A millisecond is quite a short time and is not that easy to measure accurately. (Obviously, with the right expensive equipment one can achieve much greater accuracy, but not with stackmat timers.)


----------



## Stefan (Dec 9, 2014)

tseitsei said:


> Stackmat timers are incapable of doing so.



How do you know?


----------



## tseitsei (Dec 9, 2014)

Stefan said:


> How do you know?



They have a bias towards certain results even in the hundredths of a second, don't they?
So doesn't that mean they are not actually absolutely accurate even in that range?


----------



## Stefan (Dec 9, 2014)

tseitsei said:


> They have a bias towards certain results even in the hundredths of a second, don't they?
> So doesn't that mean they are not actually absolutely accurate even in that range?



Gen 2, yes. That's well-known. But how do you know it about the Pro timers (or Gen 1 or Gen 3)?


----------



## tseitsei (Dec 9, 2014)

Stefan said:


> Gen 2, yes. That's well-known. But how do you know it about the Pro timers (or Gen 1 or Gen 3)?


I don't know if that is true for timers other than Gen 2, but a lot of comps still use Gen 2 timers.

No need to make people buy new equipment just to be able to add an extra digit to the results IMO


----------



## Kit Clement (Dec 9, 2014)

Well, why don't we all get out our Pro timers and start testing the distribution?


----------



## Lucas Garron (Dec 10, 2014)

Kit Clement said:


> Well, why don't we all get out our Pro timers and start testing the distribution?



I have a personal TODO to calculate distributions for US Nationals 2014, since we know every attempt used a Pro timer. I'd be glad if someone else does the stats for this, though.


----------



## Stefan (Dec 10, 2014)

Lucas Garron said:


> I have a personal TODO to calculate distributions for US Nationals 2014, since we know every attempt used a Pro timer. I'd be glad if someone else does the stats for this, though.



Do we have it with milliseconds?


----------



## JasonDL13 (Dec 10, 2014)

Stefan said:


> What do you mean?



If you look in the WCA database (the mysql stuff) all of the MBLD stats are MM:SS.99


----------



## Stefan (Dec 10, 2014)

A while back I analyzed the known "gap times" and found that WC2013, which I think also only used pro timers, got close to the expected ~40%:
https://www.speedsolving.com/forum/...request-Thread&p=939478&viewfull=1#post939478

Here's the centiseconds part distribution of US Nationals 2014 (I hope it's clear what I mean, like the first row meaning 117 solves were m:ss.56 long, according to the WCA database). I don't know how to judge it, though.


Spoiler: Centiseconds distribution at US Nationals 2014



Using data from WCA_export536_20141209 and Stefan's WCA Statistics Tools.


| centiseconds | solves |
| --- | --- |
| .56 | 117 |
| .44 | 125 |
| .37 | 127 |
| ... | (remaining rows garbled in the archive; counts run up to ~183) |



Spoiler: SQL





```
SELECT v MOD 100 centiseconds, COUNT(*) solves
FROM
((SELECT value1 v, eventId FROM Results WHERE competitionId='USNationals2014') UNION ALL
(SELECT value2 v, eventId FROM Results WHERE competitionId='USNationals2014') UNION ALL
(SELECT value3 v, eventId FROM Results WHERE competitionId='USNationals2014') UNION ALL
(SELECT value4 v, eventId FROM Results WHERE competitionId='USNationals2014') UNION ALL
(SELECT value5 v, eventId FROM Results WHERE competitionId='USNationals2014')) tmp
JOIN Events ON id = eventId
WHERE format = 'time' AND v > 0 AND v < 10*60*100
GROUP BY v MOD 100
ORDER BY 2;
```


----------



## Stefan (Dec 10, 2014)

JasonDL13 said:


> If you look in the WCA database (the mysql stuff) all of the MBLD stats are MM:SS.99



No, I don't see anything like that. Where exactly?


----------



## JasonDL13 (Dec 10, 2014)

Stefan said:


> No, I don't see anything like that. Where exactly?



Huh. After checking I was wrong. I remember reading the mysql file and seeing lots of results ending in 99.

Sorry.


----------



## goodatthis (Dec 10, 2014)

Is there any way to check the distribution on Stackmat timers without a brute-force data approach?

Example: finding the probabilities of PLLs by calculating symmetry, instead of solving the cube 1000 times and seeing which PLL you get.


----------



## Lucas Garron (Dec 10, 2014)

Stefan said:


> A while back I analyzed the known "gap times" and found that WC2013, which I think also only used pro timers, got close to the expected ~40%:
> https://www.speedsolving.com/forum/...request-Thread&p=939478&viewfull=1#post939478
> 
> Here's the centiseconds part distribution of US Nationals 2014 (I hope it's clear what I mean, like the first row meaning 117 solves were mm:ss.56 long, according to the WCA database). I don't know how to judge it, though.



I expect the centiseconds to be smoothed out mod 100. What if you take raw times under 20 seconds and look at the distribution? (Maybe combine both Worlds and Nats.)


----------



## Dene (Dec 10, 2014)

obelisk477 said:


> I am thinking about it and am coming up with nothing. I don't know. Can we? Are you saying that the electronics are incapable of doing so, or something else?



I don't feel like writing out a bunch of arguments again, so I'm mostly going to copy/paste from a very long discussion among the delegates a year ago that I just went reading through (I surprise myself with the brilliance of my arguments). Bear in mind that what I wrote before was in response to something, but I'm editing it down to the relevant stuff.


The issue is not that we don't trust the timers in general; I'm sure each individual timer that we use (unless defective) will measure the time fairly accurately, and consistently within itself. The big question is, to what degree is the timer accurate in the sense that it gives a time that would be true in a perfect world (and every timer would give the same result)? As the swimming pool example shows, while we could use hyper-sensitive laser technology to measure time, there are extraneous factors that limit accuracy (such as the size of the swimming pool itself). As I proposed above, one extraneous factor might be the sensitivity of the pads that determine when the clock starts and stops. I recall vividly a situation when I was in California, and there was a Gen 2 timer that would "activate" the pads without physical contact between the hands and the pads. Dan Dzoan was placing his hands maybe 2mm or 3mm away from the pads and the clock could be started and stopped. Other timers require a degree of pressure to be placed on the pads for the clock to be started and stopped. So the accuracy of the timer is limited by extraneous factors.

Therefore I believe it is reasonable to assume that if 10 different Gen 3 timers were started and stopped in exactly the same way for exactly the same period of time (let's pretend this experiment is controlled by robots to ensure each timer is handled identically), we might expect a variation in the times displayed at the end of it (e.g. 10.106, 10.108, 10.110, 10.102, 10.106, etc.).

My example times have biased this next statement, but I don't think it's unreasonable. Essentially, the biggest variation would surely be in the thousandths of a second. There might be a bit of variation in the hundredths of a second but this would be significantly less, and perhaps significant to an extent that it is acceptable. But surely the variation in the thousandths of a second is unacceptable, and this is why we should never use thousandths of a second for official purposes (just like it seems this is what has been decided for other sports).

Speaking of other sports, it should be noted that I've said a bunch of stuff with absolutely no real evidence to back it up. However, the technology to measure to thousandths of a second has been around for decades, and I am 99.99999% sure this has been tested in other sports, especially Olympic sports. Obviously times are still measured in hundredths of a second, so it must have been found that this is the most acceptable option. I believe we should follow suit.



Someone responded with Formula 1 as an example of a sport where thousandths are used. Essentially my counter-argument was that Formula 1 isn't comparable, as people are mostly racing against each other, not the clock.

I also want to point out I am not against the idea of scrapping hundredths of a second as well.


----------



## Stefan (Dec 10, 2014)

Lucas Garron said:


> I expect the centiseconds to be smoothed out mod 100.



How "smoothed out"?

Here's an experiment, testing the same number of random centiseconds:
http://ideone.com/l5oHhM
If you run it again, you'll get different frequencies, but they always look similar to the US 2014 distribution.

Trying it 1000 times, the average minimum frequency was 121.993 and the average maximum frequency was 183.888:
http://ideone.com/wouH5n (R is awesome)
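For anyone without the ideone links handy, the experiment can be reproduced with a short sketch. The solve count of ~15,000 is my guess at the US Nationals 2014 sample size, not a figure from the thread:

```python
import random

def avg_min_max_freq(n_solves: int, trials: int, seed: int = 0):
    # For each trial, draw n_solves uniformly random centisecond values,
    # record the rarest and most common bin, and average over trials.
    rng = random.Random(seed)
    mins, maxs = [], []
    for _ in range(trials):
        counts = [0] * 100
        for _ in range(n_solves):
            counts[rng.randrange(100)] += 1
        mins.append(min(counts))
        maxs.append(max(counts))
    return sum(mins) / trials, sum(maxs) / trials

# ~15,000 solves per trial is an assumption about the Nationals data set.
avg_min, avg_max = avg_min_max_freq(15_000, trials=50)
print(avg_min, avg_max)  # perfectly uniform data still shows a wide spread
```

The point of the simulation is the same as Stefan's: a spread from roughly 120 to roughly 180 is exactly what uniform randomness produces at this sample size, so the US 2014 distribution shows no obvious anomaly.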



Lucas Garron said:


> What if you take raw times under 20 seconds and look at the distribution?



Do you think that would be useful? I'd expect biases towards the average 2x2 time, for example. I also don't know how to visualize it well.


----------



## Rainbow Flash (Dec 10, 2014)

obelisk477 said:


> I am thinking about it and am coming up with nothing. I don't know. Can we? Are you saying that the electronics are incapable of doing so, or something else?



Well here's the deal. Here are the top two world records for a single 3x3 solve:

5.55 seconds by Mats Valk
5.66 seconds by Feliks Zemdegs

The difference is 0.11 seconds. That's about 1 or 2 turns I guess, unless I'm very much mistaken.

Well... Dene said think about it. What if Zemdegs took 0.056 seconds longer than Valk to pick up the cube? Then when he dropped it his hands were slightly higher than Valk's, and he wasted another 0.056 seconds. So his solve was also 5.55 seconds, but the thing that stopped him from tying with Valk is the tenth of a second he wasted picking up and dropping the cube. This is just an example, and Valk also had to pick up his cube, which makes it even more complicated. Anyway, that's thinking about it.

Honestly I think adding the millisecond is not going to do much. If the hundredth of a second isn't going to be super accurate anyway, how much less the millisecond!

(I actually think it's best to stick to the tenth of a second, hehe...)

Rainbow Flash


----------



## Stefan (Dec 10, 2014)

Another test with the centisecond frequencies (at WC2013, m:ss.00 appeared 153 times, etc), although I don't really know what I'm doing and what it means. Mainly just hoping it's useful for someone who knows what to do:
http://ideone.com/tarFvQ
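One standard way to judge such a frequency list is a chi-square goodness-of-fit test against the uniform distribution. A sketch, with made-up counts standing in for the real WC2013 column (scipy.stats.chisquare would also give a p-value directly):

```python
def chi_square_uniform(counts):
    # Goodness-of-fit statistic for "all 100 centisecond values equally
    # likely"; compare it to a chi-square table with 99 degrees of freedom.
    expected = sum(counts) / len(counts)
    return sum((c - expected) ** 2 / expected for c in counts)

# Perfectly uniform counts give a statistic of 0.
assert chi_square_uniform([153] * 100) == 0

# Made-up counts standing in for the real per-centisecond frequencies.
fake_counts = [150 + (i * 37) % 11 - 5 for i in range(100)]
print(round(chi_square_uniform(fake_counts), 1))
```

A small statistic (relative to the 99-degrees-of-freedom critical value) means the observed counts are consistent with uniform centiseconds.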


----------



## Dene (Dec 10, 2014)

Rainbow Flash said:


> Well... Dene said think about it. What if Zemdegs took 0.056 seconds longer than Valk to pick up the cube? Then when he dropped it his hands were slightly higher than Valk's, and he wasted another 0.056 seconds. So his solve was also 5.55 seconds, but the thing that stopped him from tying with Valk is the tenth of a second he wasted picking up and dropping the cube. This is just an example, and Valk also had to pick up his cube, which makes it even more complicated. Anyway, that's thinking about it.



This doesn't mean anything. If Feliks takes too long to pick up the cube that's his problem. 

On the other hand, if Feliks had a timer with slightly less sensitivity, forcing him to press harder on the timer to start it, then press harder to stop it, that would definitely be unfair.


----------



## Rainbow Flash (Dec 11, 2014)

Dene said:


> This doesn't mean anything. If Feliks takes too long to pick up the cube that's his problem.
> 
> On the other hand, if Feliks had a timer with slightly less sensitivity, forcing him to press harder on the timer to start it, then press harder to stop it, that would definitely be unfair.



Well yes, I guess you're right, Dene. But then what we're measuring now is not just how fast a cuber can cube, but also how long it takes them to pick up and drop the cube. The second problem is, like you said, the differences across timers.

I honestly think the hundredths should be cut out, to blur the results of unfair factors. But the WCA will keep the hundredths of seconds for the sake of keeping cubers from tying. This world is always looking for a winner to shame everyone else.

*EDIT*: OK, not shame everyone else. I meant the human race is always looking for an idol. Anyway, that's unrelated...

(I also made a stupid maths mistake. Half of 0.11 is 0.055 not 0.056.)


----------



## Dene (Dec 11, 2014)

Rainbow Flash said:


> Well yes, I guess you're right Dene. But then what we're measuring now is not just how fast a cuber can cube, but how long it takes for him to pick up and drop the cube. The second problem is like what you said, with the differences across timers.



Yes, this is an unfortunate problem inherent in how we time ourselves. Without the appropriate technology, it has to be this way.


----------



## Stefan (Dec 11, 2014)

Rainbow Flash said:


> WCA will keep the hundredths of seconds for the sake of keeping cubers from tying. This world is always looking for a winner to shame everyone else.



Why do you only advocate cutting centiseconds, not also deciseconds? You want to shame people?!


----------



## Rainbow Flash (Dec 11, 2014)

Stefan said:


> Why do you only advocate cutting centiseconds, not also deciseconds? You want to shame people?!



Hehe...I shouldn't have said that, and now _I've _been shamed...*sob, sob*...anyway.


----------



## Stefan (Dec 11, 2014)

In my opinion, we're keeping centiseconds not for tie-prevention or for shaming or for idolizing, but for accuracy (we have measurements in centiseconds, there's no good reason to throw them away, and many other sports use them as well).



Rainbow Flash said:


> I honestly think the hundredths should be cut out, to blur the results of unfair factors.



So if two guys "actually do" 5.97 but get measured 5.99 and 6.00 seconds due to unfair factors, you want to make them 5.9 and 6.0, and that's blurring?


----------



## Stefan (Dec 11, 2014)

My speedstacks vs qj comparison measurements might be interesting again, to show that centisecond measuring appears to be quite accurate:
https://www.youtube.com/watch?v=qhQlkeJvVvI#t=47
Watch 0:47 to 2:10, and note that all three 0.01-second discrepancies can be attributed to that speedstacks timer's bug of not being capable of getting the result that the qj timer got (see here).

My results as text, though watch the video to see how I measured:

```
   1.05 vs    1.05
   0.93 vs    0.93
   1.40 vs    1.39     +0.01 difference, attributable to speedstacks bug
   1.52 vs    1.52
   1.13 vs    1.13
   1.31 vs    1.32     -0.01 difference, attributable to speedstacks bug
   1.69 vs    1.69
   0.08 vs    0.08
  15.61 vs   15.62     -0.01 difference, attributable to speedstacks bug
1: 9.81 vs 1:09.81
9:59.22 vs 9:59.26     -0.04 difference, don't know why
9:59.43 vs 9:59.46     -0.03 difference, don't know why
9:59.44 vs 9:59.48     -0.04 difference, don't know why
9:59.52 vs 9:59.54     -0.02 difference, don't know why
```

What I hadn't realized until now: If I "account for" the speedstacks bug by changing some speedstacks results by 0.01 to results that it can't actually get, then it's like this:


```
   1.05 vs    1.05     equal
   0.93 vs    0.93     equal
   1.39 vs    1.39     equal
   1.52 vs    1.52     equal
   1.13 vs    1.13     equal
   1.32 vs    1.32     equal
   1.69 vs    1.69     equal
   0.08 vs    0.08     equal
  15.62 vs   15.62     equal
1: 9.81 vs 1:09.81     equal
9:59.23 vs 9:59.26     -0.03 difference, don't know why
9:59.43 vs 9:59.46     -0.03 difference, don't know why
9:59.45 vs 9:59.48     -0.03 difference, don't know why
9:59.51 vs 9:59.54     -0.03 difference, don't know why
```

Note that all the "small" times are equal now and that all the close-to-max times differ by the same amount. So I think I was able to accurately measure centiseconds, and the differences are only because of the old speedstacks bug and because qj and speedstacks have slightly different speeds (0.03 seconds per 10 minutes, so still much better than decisecond accuracy even for the largest times we record in centiseconds).

Until now, I had thought the 0.01 fluctuations were my fault, because I didn't remove my fingers from both timers at the same time, or didn't put them back at the same time. Or also due to the timers having different sensitivity. But now I think it was mainly the speedstacks bug. Think about this: you can probably move your hand 1 meter in about 0.1 seconds, and accelerate your fingers from the timer pad to that speed almost instantly. So even the 3mm that Dene mentioned about Dan only accounts for about 0.3 milliseconds (and I do mean milliseconds, not centiseconds). Thus even this extreme example is doubtful as an argument even against milliseconds, let alone against centiseconds.
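The 3mm back-of-the-envelope arithmetic above checks out (the speed and gap are the values assumed in the text, not measurements):

```python
# Hand moves ~1 m in ~0.1 s, i.e. ~10 m/s, reached almost instantly.
hand_speed = 1.0 / 0.1    # metres per second
gap = 0.003               # the 3 mm gap from Dene's example, in metres
delay = gap / hand_speed  # extra time before the pad is reached, in seconds
print(delay * 1000)       # ~0.3 milliseconds
```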

Could people with two pro timers please repeat my comparison experiment with them? I suspect that the example variations Dene guessed for accurate robot-controlled timing would not occur, and might not be as bad even for human-controlled timing.


----------



## Stefan (Dec 11, 2014)

Btw, Natan previously tested one pro timer, trying 777 times to get 2 seconds. He managed to get every time between 1.912 and 2.050, so it looks like there are no impossible-to-get times. And I then tested his centisecond and millisecond distributions for anomalies and they looked fine. I'm not aware of any tests comparing two pro timers (or a pro timer with a Gen 2 or qj), so that would still be interesting.



Spoiler: My (non-)findings



I did some tests, everything looked ok to me. Here's the distribution
of the last digit (milliseconds):

--- Natan from 1.767 to 2.177 ---
1 appeared 58 times
6 appeared 67 times
8 appeared 72 times
0 appeared 77 times
9 appeared 78 times
2 appeared 81 times
5 appeared 82 times
7 appeared 86 times
4 appeared 87 times
3 appeared 89 times
---
average: 77.7 min=58 max=89

--- random.org (same number of samples) ---
9 appeared 63 times
4 appeared 70 times
5 appeared 71 times
3 appeared 78 times
7 appeared 79 times
0 appeared 80 times
2 appeared 83 times
6 appeared 84 times
8 appeared 84 times
1 appeared 85 times
---
average: 77.7 min=63 max=85

-------------------------------------------------------------------------------

Here's the centiseconds digit:

--- Natan from 1.767 to 2.177 ---
7 appeared 65 times
4 appeared 71 times
6 appeared 73 times
0 appeared 75 times
1 appeared 76 times
8 appeared 77 times
9 appeared 79 times
5 appeared 86 times
2 appeared 87 times
3 appeared 88 times
---
average: 77.7 min=65 max=88

--- random.org (same number of samples) ---
1 appeared 67 times
2 appeared 69 times
6 appeared 70 times
8 appeared 70 times
9 appeared 75 times
3 appeared 79 times
4 appeared 82 times
0 appeared 85 times
5 appeared 88 times
7 appeared 92 times
---
average: 77.7 min=67 max=92

-------------------------------------------------------------------------------

Centiseconds and milliseconds together:

--- Natan from 1.767 to 2.177 ---
61 appeared 2 times
02 appeared 3 times
92 appeared 3 times
...
08 appeared 14 times
62 appeared 14 times
94 appeared 17 times
---
average: 7.77 min=2 max=17

--- random.org (same number of samples) ---
97 appeared 2 times
17 appeared 3 times
69 appeared 3 times
...
52 appeared 14 times
51 appeared 15 times
98 appeared 15 times
---
average: 7.77 min=2 max=15


----------



## Stefan (Dec 11, 2014)

Testing my Gen2 stackmat against my Nexus 4 smartphone with the ChronoPuzzle app:


```
100 Tests
 diff  raw adjusted
  0.0   26   48
 0.01   51   34
 0.02   17   12
 0.03    5    5
 0.04    1    1
average raw diff:      0.0104
average adjusted diff: 0.0077
```

Data and code here.

I tried 100 times, 26 times I got the same result on both timers, 77 times I was at most 0.01 off, etc. Worst difference was 0.04, and I only got that once. Average distance was 1.04 centiseconds. If you let me adjust the stackmat time to a neighboring impossible-to-get time, then 48 times the results were the same and 82 were at most 0.01 off and the average difference was 0.77 centiseconds.

So even with different technologies and my human controlling using differing finger positions and a known buggy timer, I pretty much got centisecond accuracy.

(I note that I only tested times around 2 seconds. So if the timers ran at slightly different speeds, this test wouldn't have noticed. But I only wanted to test the starting/stopping accuracy here, not the speed accuracy.)
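For anyone repeating this comparison, the raw column of such a difference table can be tabulated with a short sketch (the sample pairs here are invented; the speedstacks-bug "adjusted" column from the post is not modelled):

```python
from collections import Counter

def diff_table(pairs):
    # pairs: (timer_a, timer_b) readings in seconds for the same attempt.
    # Returns the frequency of each absolute difference (rounded to
    # centiseconds) and the average absolute difference.
    diffs = [round(abs(a - b), 2) for a, b in pairs]
    return Counter(diffs), sum(diffs) / len(diffs)

# Invented sample data in the spirit of the 100-test comparison.
pairs = [(2.01, 2.01), (1.98, 1.99), (2.05, 2.03), (2.00, 2.00)]
table, avg = diff_table(pairs)
print(table, avg)
```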


----------



## Stefan (Dec 11, 2014)

Another test, this time plusTimer (which shows milliseconds) on my Nexus 4 vs plusTimer on my Nexus 7 (2012). In 20 attempts, the Nexus 4 time was *always* higher, on average by about 2.5 centiseconds (I prevented some bias by having the Nexus 4 on the left side half the time and on the right side half the time, and putting something under it so the surfaces were at about the same height). Then I repeated this with Nano Timer (shows centiseconds) and the same happened (20 attempts, Nexus 4 always higher, on average by about 3 centiseconds). Don't know whether it's a hardware or software issue (my Nexus 7 doesn't have Android 5 yet).


----------



## Rainbow Flash (Dec 12, 2014)

OK OK Stefan, you got me. Theories, I guess, will out-rule hypotheses in the end.



Stefan said:


> So if two guys "actually do" 5.97 but get measured 5.99 and 6.00 seconds due to unfair factors, you want to make them 5.9 and 6.0, and that's blurring?



Um no, 5.99 will get rounded up to 6.0, while 6.00 will lose a zero and become 6.0. So: 6.0 and 6.0, equal times.

Rainbow Flash (I hate my username)


----------



## Stefan (Dec 12, 2014)

Rainbow Flash said:


> Um no, 5.99 will get rounded up to 6.0, while 6.00 will lose a zero and become 6.0 - 6.0 and 6.0 - equal times.



Well if you mean round to nearest, then don't say "cut out". Especially not in the context of a thread called "Truncating...".

And obviously, for rounding to nearest the example is something like this:

So if two guys "actually do" 5.94 but get measured 5.94 and 5.95 seconds due to unfair factors, you want to make them 5.9 and 6.0, and that's blurring?


----------



## Rainbow Flash (Dec 12, 2014)

Stefan said:


> Well if you mean round to nearest, then don't say "cut out". Especially not in the context of a thread called "Truncating...".
> 
> And obviously, for rounding to nearest the example is something like this:
> 
> So if two guys "actually do" 5.94 but get measured 5.94 and 5.95 seconds due to unfair factors, you want to make them 5.9 and 6.0, and that's blurring?



*sigh*, Stefan, will you ever give up?! I do now...

(BTW I'm learning your Old Pochmann method this very moment...)


----------

