|
Post by xanet on Sept 13, 2017 16:22:49 GMT -5
z = (1.82 - 1.23)/(1.114/29^0.5) = 2.8521; 2.8521 -> .9978; 1 - .9978 = .0022 is our p value

You have to adjust for sample size and variance. I ran the ANOVA based on the summary statistics and it was significant (p = 0.01), so I ran post-hoc tests (Tukey's HSD) to separate means. I found no significant difference between the two Afrezza treatments, but there was a significant difference between either Afrezza treatment and the Met & Sec treatment (p = .03 in either case). Cheers!
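The z calculation quoted above (reading the denominator as 1.114 divided by the square root of n = 29, with a one-sided tail) can be reproduced with nothing but the standard library; a minimal sketch, assuming the thread's numbers (means 1.82 and 1.23, SD 1.114, n = 29):

```python
# Sketch of the quoted one-sided z calculation. The quoted post treats 1.23
# as a known reference mean and uses only one arm's SD (1.114) and n (29).
# The normal CDF is computed from the standard library via erfc.
import math

def z_test_upper(mean, ref_mean, sd, n):
    """One-sided (upper-tail) z test of a sample mean against a reference mean."""
    z = (mean - ref_mean) / (sd / math.sqrt(n))
    p_one_sided = 0.5 * math.erfc(z / math.sqrt(2))  # 1 - Phi(z)
    return z, p_one_sided

z, p = z_test_upper(1.82, 1.23, 1.114, 29)
print(round(z, 4), round(p, 4))  # z ≈ 2.8521, p ≈ 0.0022
```

This reproduces the quoted .0022, but as xanet notes it ignores the second sample's size and variance, which is why the Welch t-test below gives a larger (more honest) p-value.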
|
|
|
Post by dreamboatcruise on Sept 13, 2017 18:32:55 GMT -5
derek2 ... interestingly, a graph of mortality for semiconductor parts has the same type of bimodal distribution. Probably air conditioners as well, peppy. [edit: though with air conditioners it isn't purely a statistical distribution function as they are evil creatures that like to die when they can inflict the most suffering on their owners... the hottest weekend of the year] grumble, grumble
|
|
|
Post by contrastock on Sept 13, 2017 18:48:10 GMT -5
z = (1.82 - 1.23)/(1.114/29^0.5) = 2.8521; 2.8521 -> .9978; 1 - .9978 = .0022 is our p value You have to adjust for sample size and variance. I ran the ANOVA based on the summary statistics and it was significant (p = 0.01), so I ran post-hoc tests (Tukey's HSD) to separate means. I found no significant difference between the two Afrezza treatments, but there was a significant difference between either Afrezza treatment and the Met & Sec treatment (p = .03 in either case). Cheers!

Great! I was planning on making some adjustments to get it more accurate, but I like your method a lot more than what I was thinking.
|
|
|
Post by mnkdfann on Sept 13, 2017 18:55:49 GMT -5
Using R:

t.test2 <- function(m1, m2, s1, s2, n1, n2, m0=0, equal.variance=FALSE)
{
  if( equal.variance==FALSE )
  {
    se <- sqrt( (s1^2/n1) + (s2^2/n2) )
    # welch-satterthwaite df
    df <- ( (s1^2/n1 + s2^2/n2)^2 )/( (s1^2/n1)^2/(n1-1) + (s2^2/n2)^2/(n2-1) )
  } else
  {
    # pooled standard deviation, scaled by the sample sizes
    se <- sqrt( (1/n1 + 1/n2) * ((n1-1)*s1^2 + (n2-1)*s2^2)/(n1+n2-2) )
    df <- n1+n2-2
  }
  t <- (m1-m2-m0)/se
  dat <- c(m1-m2, se, t, 2*pt(-abs(t), df))
  names(dat) <- c("Difference of means", "Std Error", "t", "p-value")
  return(dat)
}

> t.test2( 1.82, 1.23, 1.114, 1.080, 29, 72, m0=0 )
Difference of means           Std Error                   t             p-value
         0.59000000          0.24288468          2.42913638          0.01874368

Credit to: stats.stackexchange.com/questions/30394/how-to-perform-two-sample-t-tests-in-r-by-inputting-sample-statistics-rather-tha
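The R function above is the standard Welch (unequal-variance) two-sample t test built from summary statistics. As a cross-check, assuming SciPy is available, the same numbers can be fed to SciPy's built-in summary-statistics t test:

```python
# Cross-check of the R t.test2 result using SciPy's summary-statistics t test.
# Inputs are the thread's values: mean/SD/n of 1.82/1.114/29 vs 1.23/1.080/72.
from scipy.stats import ttest_ind_from_stats

t_stat, p_value = ttest_ind_from_stats(
    mean1=1.82, std1=1.114, nobs1=29,
    mean2=1.23, std2=1.080, nobs2=72,
    equal_var=False,  # Welch's test, matching equal.variance=FALSE above
)
print(round(t_stat, 4), round(p_value, 4))  # t ≈ 2.4291, p ≈ 0.0187
```

This matches the R output (t = 2.4291, p = 0.0187) to four decimal places.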
|
|
|
Post by zzhoskins on Sept 13, 2017 20:51:13 GMT -5
z = (1.82 - 1.23)/(1.114/29^0.5) = 2.8521; 2.8521 -> .9978; 1 - .9978 = .0022 is our p value You have to adjust for sample size and variance. I ran the ANOVA based on the summary statistics and it was significant (p = 0.01), so I ran post-hoc tests (Tukey's HSD) to separate means. I found no significant difference between the two Afrezza treatments, but there was a significant difference between either Afrezza treatment and the Met & Sec treatment (p = .03 in either case). Cheers!

Appreciate your efforts. Is there a way to do anything similar with the data that, presumably, was submitted to the FDA for the label change:
www.mannkindcorp.com/assets/Baughman-2016-TI-displays-earlier-onset-and-shorter-duration-than-insulin-lispro-ADA-100-LB.pdf
|
|
|
Post by derek2 on Sept 14, 2017 6:09:19 GMT -5
derek2 ... interestingly, a graph of mortality for semiconductor parts has the same type of bimodal distribution. Probably air conditioners as well, peppy. [edit: though with air conditioners it isn't purely a statistical distribution function as they are evil creatures that like to die when they can inflict the most suffering on their owners... the hottest weekend of the year] grumble, grumble

Also known as a bathtub curve. Stuff (or people) fails right off the bat, or works for an expected duty cycle before failure. On the human side, that's known as under-5 mortality. In the mid-70's, 60,000 children died PER DAY from preventable causes. Today, with more than twice the population, it's 20,000 per day. From a rate perspective, that's 5/6 of the way there, if your goal is to eliminate that terrible toll. So a little positive news there.
|
|
|
Post by xanet on Sept 14, 2017 15:54:29 GMT -5
You have to adjust for sample size and variance. I ran the ANOVA based on the summary statistics and it was significant (p = 0.01), so I ran post-hoc tests (Tukey's HSD) to separate means. I found no significant difference between the two Afrezza treatments, but there was a significant difference between either Afrezza treatment and the Met & Sec treatment (p = .03 in either case). Cheers!

Appreciate your efforts. Is there a way to do anything similar with the data that, presumably, was submitted to the FDA for the label change:
www.mannkindcorp.com/assets/Baughman-2016-TI-displays-earlier-onset-and-shorter-duration-than-insulin-lispro-ADA-100-LB.pdf
Yes for things like response times, but I wasn't clear exactly how many patients got each treatment (was it all 30?). I need the mean, standard deviation and sample size for each treatment to calculate it. And I will be out of the country until Monday, so maybe someone else can take a crack at it. I also don't do work in human subjects, so I'm not familiar with these types of studies. I do plant research.
|
|
|
Post by xanet on Sept 14, 2017 16:07:45 GMT -5
> t.test2( 1.82, 1.23, 1.114, 1.080, 29, 72, m0=0 )
Difference of means           Std Error                   t             p-value
         0.59000000          0.24288468          2.42913638          0.01874368

Yes, that's another way to do it. The t-test is more powerful (lower p-value, more likely to detect a difference) than ANOVA, but only allows comparison between two treatments. ANOVA allows us to compare the effects of all treatments against each other. If p < 0.05, then we can run additional tests of the effect of each treatment against each other treatment for separate p-values. Tukey's adjusts those p-values for multiple comparisons, to prevent false positives that are really just the result of random chance.
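Tukey's HSD requires the studentized range distribution, but the core idea of adjusting p-values for multiple comparisons can be sketched with the simpler (and more conservative) Bonferroni correction. The p-values below are purely illustrative, not from the study:

```python
# Illustrative sketch of multiple-comparison adjustment using Bonferroni,
# a simpler, more conservative stand-in for Tukey's HSD. Each raw p-value
# is multiplied by the number of comparisons (capped at 1).
def bonferroni(p_values):
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# Three hypothetical pairwise comparisons (not the study's actual values):
raw = [0.010, 0.030, 0.400]
print(bonferroni(raw))  # adjusted p-values ≈ [0.03, 0.09, 1.0]
```

A raw p = 0.03 that looks significant on its own rises to 0.09 after adjusting for three comparisons, which is exactly the kind of false positive the post-hoc correction is meant to prevent.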
|
|
|
Post by dreamboatcruise on Sept 18, 2017 22:37:26 GMT -5
If I could I think I'd switch my vote to delayed from early. Just getting a feeling the ultra-rapid may be something they'll want more time to consider. Wish I didn't have this feeling. Hopefully karma will want to prove me wrong for switching my vote.
|
|
|
Post by goyocafe on Sept 18, 2017 23:07:21 GMT -5
If I could I think I'd switch my vote to delayed from early. Just getting a feeling the ultra-rapid may be something they'll want more time to consider. Wish I didn't have this feeling. Hopefully karma will want to prove me wrong for switching my vote.

You have to wonder when the FDA actually starts deliberations on these reviews, when 10 months pass and they come back at the last minute and say they need more time. What were they doing for the past 10 months? Hopefully they'll come through. There's that word again. 😏
|
|
|
Post by wiscdh on Sept 22, 2017 11:17:16 GMT -5
I wonder if you ran the same poll today if people would change their vote? I voted that the FDA would make their decision on the 29th but now I would change my vote to delayed.
|
|
|
Post by alethea on Sept 22, 2017 11:20:28 GMT -5
I wonder if you ran the same poll today if people would change their vote? I voted that the FDA would make their decision on the 29th but now I would change my vote to delayed. Me too. Exactly. FDA's ethics and scruples are no better than Wall Street's.
|
|
|
Post by sportsrancho on Sept 22, 2017 12:25:08 GMT -5
We get a PR on Monday morning Oct 1st is my guess:-)
|
|
|
Post by sla55 on Sept 22, 2017 12:54:06 GMT -5
We get a PR on Monday morning Oct 1st is my guess:-) My guess is it will be on Monday morning Oct 2nd.
|
|
|
Post by mytakeonit on Sept 22, 2017 13:01:43 GMT -5
sla - I noticed that also ... BUT, my mommy didn't raise me to be an idiot to point it out to a screaming, clawing sports.
|
|