|
Post by dreamboatcruise on Dec 22, 2015 13:47:45 GMT -5
tommix321... yes, I've taken quite a bit of statistics, as I have a Ph.D. in engineering with one focus area being communication theory, which is built on statistics... detecting signals in noise. I do not claim to be an expert in clinical trial design, but I do know safety studies must be much longer and include more people because they are looking for the dangerous needles in the haystack. A trial for superiority is not going to be as big as the required FDA safety study, and I would guess it really could be done in 4-6 months and involve hundreds, not thousands.

It isn't unreasonable to look at other trials as you cite, but you should keep in mind that a trial would naturally be structured to include whatever time frame is needed for the particular therapeutic to achieve the desired result, taking into account the occurrence rate they are trying to uncover. Toujeo may take longer to achieve lower A1c (that I'm not sure about), but I do know that the difference in hypoglycemia they are trying to prove is not a huge one... i.e. it would take more data. Toujeo simply does not have as significant a clinical advantage as Afrezza does. The more pronounced the advantage, the smaller the trial needed to show it statistically. So there are limitations to comparing one trial against another.

It does seem like you are possibly confusing variability in the population vs. individual variability. In your bell curve example you are implying that if there is a wide bell curve for patients being treated for diabetes, a reduction in A1c during the study might be explained by random variability. This simply isn't a valid way of looking at it. The bell curve is very wide because patients progress along it. Patients do not randomly improve and shift to the lower end of the scale. You'd need statistics about the changes typically seen in patients whose treatment remains the same... that would show that most patients being treated do not improve on their own without changes in their therapy. Having a control group gives hard data.
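The claim that a more pronounced advantage needs a smaller trial can be sketched with the standard two-group sample-size formula, n per arm ≈ 2(z_α/2 + z_β)²σ²/Δ². The numbers below (an A1c standard deviation of 1.2 percentage points, effect sizes of 0.3 vs. 0.8 points) are purely hypothetical, chosen only to illustrate the trade-off:

```python
import math

def patients_per_arm(effect, sigma, z_alpha=1.96, z_beta=0.84):
    """Approximate patients per arm needed to detect a mean difference
    of `effect` with two-sided alpha = 0.05 (z = 1.96) and 80% power
    (z = 0.84), assuming a common standard deviation `sigma` in both arms."""
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / effect ** 2)

# Hypothetical A1c standard deviation of 1.2 percentage points.
small_edge = patients_per_arm(effect=0.3, sigma=1.2)  # modest advantage
big_edge = patients_per_arm(effect=0.8, sigma=1.2)    # pronounced advantage
print(small_edge, big_edge)
```

Under these made-up inputs the pronounced advantage needs roughly a seventh of the patients the modest one does, which is exactly the "bigger edge, smaller trial" point.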
|
|
|
Post by tommix321 on Dec 22, 2015 15:56:11 GMT -5
I'm afraid your assessment is terribly wrong. Patients do not "progress along" a bell curve. A bell curve is formed by randomly selecting patients from, say, a diabetic population, and measuring a variable, say A1C. You collect the A1C data from these individuals and graph it. In a normal distribution, the majority of measurements will cluster around the mean. Because of individual differences, different treatments, etc., there will be variability, so that rarer measurements will be distributed further and further away from the mean -- this results in the "bell."

The width of the curve reflects how much variability exists in the population for whatever you're measuring. In the linked bell curve chart, the blue curve shows a population with relatively low variability for the measure versus the red curve. If you randomly select an individual from the red population, you will have a lower probability of selecting someone who is close to the mean than if you select one from the blue population.

Statistics is about assessing the effect of variability in deciding whether a treatment is effective or not. To do that, you randomly select subjects from your affected population -- e.g., diabetics -- and then randomly assign them to either the experimental (afrezza) group or the comparison group (RAAs). Next, you measure your variable over whatever time period you choose and then calculate the means and standard deviations for each group. Then you want to prove that the afrezza mean is better than the RAA mean. If the means are very far apart, then likely no problem. If they are closer together, then you have to use a formula based on the standard deviations to assess whether the difference in the means is indeed too large to be accounted for by an "accident" due to large variability within the population.
Now what you're really trying to determine is the degree of overlap in the graphs (see the linked overlap-of-bell-curves image). Using that image as a reference, if you put the graphs of the two groups (say green for afrezza and blue for RAA) on the same sheet of paper, then the overlap of the graphs should not exceed 5% of the area under the graphs. The "5%" is just an arbitrary number that statisticians agreed upon for proving "significance." If there is a lot of variability, the graphs will be wide and overlap more, making the risk of finding no significant difference higher. About the only way to "narrow" the graphs -- and hence improve the chance of detecting a significant difference -- is by increasing the population size. Like I said, that's why SNY chose that peculiar number of 3,270 for the toujeo trial. Their statistician assessed the variability and calculated the number of patients that would most likely be required to demonstrate a significant difference. Presuming SNY would choose A1C as the measure for an afrezza trial, they would likely need a similar number of patients.

As far as how SNY determined the length of time needed, that was likely related to the choice of variable -- A1C. The same would apply to afrezza. If they chose A1C, then they would probably need a minimum of six months to detect a difference and show repeatability. Factor in recruitment, logistics, analysis, etc., and you're looking -- most likely with extreme optimism -- at a minimum of a year.
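The "formula based upon the standard deviation" mentioned above is, in the simplest two-group case, something like a two-sample t statistic. The means, SDs, and group sizes below are invented for illustration only (they are not from any actual trial):

```python
import math

def two_sample_t(m1, s1, n1, m2, s2, n2):
    """Welch two-sample t statistic from summary statistics:
    the difference in group means divided by its standard error."""
    se = math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)
    return (m1 - m2) / se

# Hypothetical end-of-trial A1c summaries: afrezza arm 7.2% vs. RAA arm 7.6%,
# both arms with standard deviation 1.1 and 150 patients each.
t = two_sample_t(7.2, 1.1, 150, 7.6, 1.1, 150)
significant = abs(t) > 1.96  # rough 5% two-sided threshold for large samples
```

With wider spread or fewer patients, the same 0.4-point difference in means would fall below the threshold, which is the population-size point being made above.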
|
|
|
Post by symbil on Dec 22, 2015 16:01:49 GMT -5
tommix321 = the new version of the same old rrtzmd. God, this gets old.
|
|
|
Post by esstan2001 on Dec 22, 2015 16:05:12 GMT -5
Jesus H. Christ, I sure hope that SNY / MNKD will (get the FDA to) advance beyond A1C as the marker for superiority: include it, but also include CGM data showing total blood glucose excursions, incidence of hypos, freedom of dose timing, etc., such that the whole data package is overall a convincing slam dunk. I assume that the requisite paradigm change in metrics is what is taking so long to negotiate with the stuck-in-the-mud FDEELAY bureaucracy.
|
|
|
Post by dreamboatcruise on Dec 23, 2015 13:50:11 GMT -5
Well, indeed your assumption that it is a bell is likely incorrect, so I will correct myself and simply say "curve" rather than "bell curve." The A1c of a population of patients with diabetes is not a random variable, and the curve is therefore not necessarily a bell curve representing random variation around a mean. Patients do progress from one end of the curve to the other. You can't simply pretend any given metric in life is a random distribution around a mean. People designing clinical trials for diabetes certainly know that A1c in a patient population is not random variation around a mean.
|
|
|
Post by Deleted on Dec 23, 2015 14:38:09 GMT -5
Before we had CGM, the A1c was the standard, as it gave an average blood glucose reading. But to your point, Esstan, it is the excursions, those wild swings in blood glucose levels, that cause the greatest long-term health complications for people with diabetes. I have to believe that somewhere Sanofi, in a clinical trial or a trial with a managed care player, is using CGMs to make a case for reduced volatility in blood glucose levels for patients on Afrezza.

For further reading, see the link below for the Diabetes Control & Complications Trial, completed many years ago, which showed that tight control of blood glucose levels results in massive reductions (like 65%+) in long-term health complications. Sanofi has to be beating this drum to the payors as we move from fee for service to payment for clinical outcomes / improvements.

www.niddk.nih.gov/about-niddk/research-areas/diabetes/dcct-edic-diabetes-control-complications-trial-follow-up-study/Documents/DCCT-EDIC_508.pdf

PS - given that Sanofi has a relationship with Google, Dexcom also has a relationship with Google, and three of Dexcom's senior executives worked for Al Mann at MiniMed, it would be in everyone's best interest to set up one of these trials (FDA approved or otherwise). The benefits to MNKD and SNY are obvious, and it helps keep Dexcom on the front page of diabetes technology and gets Google additional kudos for their data warehousing / analysis prowess.
|
|
|
Post by tommix321 on Dec 24, 2015 1:04:17 GMT -5
I am not "pretending" anything. I did not say there was "random variation around a mean." What I did say was that, in order to draw the bell curve for A1C, you first have to randomly sample the diabetic population -- i.e., choose individuals at random from that population. You then measure each individual's A1C and plot those points on a graph.

For example, look at the linked graph of diabetes bell curves: those are two bell curves comparing the distribution of hemoglobin A1c levels in patients with and without diabetes mellitus undergoing CABG surgery. The mean for the DM patients is 8.0 with a standard deviation of 2. The mean for the non-diabetics is 6.2 with a standard deviation of 0.9.

Note that this illustrates another point. The DM group's larger standard deviation indicates that the variability of A1C in that population is over twice that of non-diabetics. Because of that increased variability, any trial that is going to try to detect a difference in effect between RAAs and afrezza on the mean A1C for each group will have to be substantially larger. As I said before, that is no doubt the reason Sanofi chose that odd 3,270 for the number of subjects in its toujeo/lantus trial. Their statistician determined that's how many would be needed to detect a significant difference in effect.
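The overlap being described can be put on a number line. Taking the figures quoted in the post (diabetics: mean 8.0, SD 2; non-diabetics: mean 6.2, SD 0.9) and numerically integrating the smaller of the two normal densities gives the area the curves share. This is just a sketch of the picture, not a formal hypothesis test; the integration range and step are arbitrary choices:

```python
import math

def normal_pdf(x, mean, sd):
    """Density of a normal distribution at x."""
    return math.exp(-((x - mean) ** 2) / (2 * sd * sd)) / (sd * math.sqrt(2 * math.pi))

def overlap_area(m1, s1, m2, s2, lo=0.0, hi=16.0, step=0.001):
    """Numerically integrate min(pdf1, pdf2): the area shared by two bell curves."""
    total, x = 0.0, lo
    while x < hi:
        total += min(normal_pdf(x, m1, s1), normal_pdf(x, m2, s2)) * step
        x += step
    return total

# Figures quoted in the post: non-diabetics N(6.2, 0.9), diabetics N(8.0, 2.0).
shared = overlap_area(6.2, 0.9, 8.0, 2.0)
```

For those inputs a bit under half of the area is shared; identical curves would overlap completely, so the number is one rough way to see how separated two populations are.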
|
|
|
Post by tommix321 on Dec 24, 2015 1:23:10 GMT -5
Note where it said what they measured: "Intensive control meant keeping hemoglobin A1C levels as close as possible to the normal value of 6 percent or less." So, first, you would have to prove that using a CGM would provide results superior to A1C in terms of predicting long term complications. Once again it becomes a cost-effectiveness issue. CGMs would clearly provide better tracking, but would the difference in expense -- since CGMs are much more expensive than A1Cs -- justify conducting a trial that would prove it more useful than A1C in terms of predicting complications?
|
|
|
Post by jpg on Dec 24, 2015 2:32:15 GMT -5
Well, indeed your assumption that it is a bell is likely incorrect. So I will correct and simply say "curve" rather than "bell curve". The curve of A1c for a population of patients with diabetes is not a random variable and the curve is therefore not necessarily a bell curve representing random variation around a mean. Patients do progress from one end of the curve to the other. You can't simply pretend any given metric in life is a random distribution around a mean. I agree with DBC. I don't follow your point. 
Very large samples are needed when you compare drugs with relatively similar PK/PD profiles. Used well (based on its PK/PD profile: dosing after meal start and with bigger boluses), Afrezza would be very different. Toujeo and Lantus are extremely similar, so you need very big sample sizes to show a small difference. Your bell curve thing is confusing (to me anyway). It's as if you couldn't do any multiple regression analysis and all your patients act in a binary fashion (on/off, diabetic or not). That doesn't make sense to me, or maybe I'm not getting your point on mean deviations? HbA1c is a marker for severity of disease, not a binary plus or minus. The artificial cutoffs you describe get you included in a study (or not), but the binary nature of the data analysis stops there...
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Dec 24, 2015 7:57:16 GMT -5
CGM provides better, i.e. actionable, data compared to A1C, as the latter does not measure the volatility of blood glucose levels, which is the main culprit in long-term health complications. Reduced average blood glucose levels combined with reduced volatility = vast reductions in long-term health complications for something like 60%+ of patients. At the time of the DCCT, as you know, CGM was not available. The DCCT proved that tight control of glucose levels significantly reduces long-term health complications with only A1c as a measuring stick. The additional SMBG data from the DCCT would not have provided as much insight into BG levels as CGM will, but it will be borne out. Dexcom's newest sensor, the G5, sends data directly to a smartphone: www.dexcom.com/g5-mobile-cgm Dexcom is working on another sensor that will be smaller and less expensive. This sensor will be sold at traditional retail pharmacies, and Dexcom has already put people in place to sell to this segment of the retail trade. The cost of a sensor plus Afrezza is minuscule compared to treating diabetic retinopathy (recurring laser treatments), kidney issues, CV problems, diabetic foot ulcers, amputation, etc. So what will happen in the future for the masses, and what can happen today for patients using the G5, is that data goes to the smartphone, where it is auto-uploaded to Google's health cloud (no idea what its official name is, but you get the idea).
Google then analyzes massive amounts of data; determines mean BG levels and volatility (standard deviation) for the entire population as well as many subsets; determines what data = minimal long-term health risk, what data = moderate risk, and what data = significant risk; sells the data to big pharma so they can ID patients and work with them to make them more compliant and healthier; and then the Rx co goes to the payor and gets paid not on fee-for-service but for improved patient outcomes. Is this exactly how it will play out? No, but you get the idea of where it is headed. Google wants the data; that is what they do.
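The volatility point can be illustrated with a toy calculation: two hypothetical CGM traces with broadly similar average glucose but very different swings, where the average (which is roughly what A1c captures) hides the difference and the standard deviation exposes it. All readings are made up for illustration.

```python
from statistics import mean, stdev

# Hypothetical CGM readings (mg/dL), one every 2 hours, for two imaginary
# patients with broadly similar average glucose but very different volatility.
steady   = [110, 118, 125, 130, 122, 115, 120, 118, 112, 126, 121, 119]
swinging = [70, 180, 95, 210, 60, 190, 85, 200, 75, 170, 90, 160]

# A1c-like metrics see only the mean; the SD captures the wild excursions.
for label, trace in (("steady", steady), ("swinging", swinging)):
    print(f"{label}: mean={mean(trace):.0f} mg/dL, SD={stdev(trace):.0f} mg/dL")
```

Two patients like these could have near-identical A1c results while facing very different long-term risk profiles.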
|
|
|
Post by uvula on Dec 24, 2015 9:06:51 GMT -5
First of all, there might be some disagreement here but I appreciate the intelligent debate.
Secondly, I think we can all agree that tighter control AND lower A1c levels are better than either one by itself, even though there is probably no real-world data proving this. Afrezza is key to achieving both at the same time.
|
|
|
Post by dreamboatcruise on Dec 24, 2015 15:50:00 GMT -5
Well, I'm not sure we're getting anywhere here, but I'll restate one more way before leaving the topic. What you are trying to detect is a CHANGE: comparing an individual after a new treatment with where they were before the treatment. In its most straightforward form, if you found an entire population of patients that had progressed to the same level of diabetes (e.g. all had an A1c of 8%) and all were on SQ RAA, you would then switch half those people to Afrezza and look at the CHANGE from 8%. It is the variability in this CHANGE that is relevant: is the change seen in the Afrezza group statistically distinguishable from the variation seen in the control? Of course the change can be looked at even if everyone in the study has a different starting point. The fact that there is a wide distribution among the entire population is irrelevant, because it is well understood that patients don't randomly jump around in the state of disease progression (and that would be the case in the control group). So the fact that many less-progressed patients may have an A1c of 7% instead of 8% isn't relevant to looking at how much one treatment lowers A1c versus another. Now, if one were to design a trial that didn't look at anyone's A1c before starting the new treatment, you'd need to consider that wide distribution of A1c that you cite: take a bunch of people with diabetes, put them on Afrezza without asking their A1c, and then compare the resulting curve after treatment with the standard curve you linked to. Of course that would be a horrible way to design a study, and it would require a much larger number of participants. Hopefully we'll learn soon enough how many patients and how long the trial will be. The number of patients will probably hinge on what degree of change in the risk of hypos is deemed relevant as a risk factor... e.g. one protocol might be to try to get everyone within an acceptable A1c level, and then the question would be whether they need to show that hypos increase by no more than 1% or no more than 0.1%. There would seem to be lots of different approaches to designing an endpoint for superiority.
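The change-from-baseline argument can be sketched numerically: simulate a cohort with a wide spread of baseline A1c but a much tighter within-patient change, and compare the two spreads. Every number here is an assumption for illustration (the 2.0-point baseline SD echoes the population spread discussed earlier; the 0.5-point average drop and 0.4-point change SD are invented), not trial data.

```python
import random
from statistics import stdev

random.seed(0)
N = 1000

# Hypothetical cohort: baseline A1c varies widely across patients, while
# each patient's own CHANGE on a new therapy is much tighter.
baselines = [random.gauss(8.0, 2.0) for _ in range(N)]    # assumed spread
changes   = [random.gauss(-0.5, 0.4) for _ in range(N)]   # assumed effect
endpoints = [b + c for b, c in zip(baselines, changes)]

# The spread of end-of-trial A1c is dominated by the population spread,
# while the spread of within-patient change is far smaller -- which is why
# a trial analysing change from baseline needs far fewer subjects.
print(f"SD of endpoint A1c:         {stdev(endpoints):.2f}")
print(f"SD of change from baseline: {stdev(changes):.2f}")
```

Since required sample size grows with the square of the relevant spread, analysing the change rather than the raw endpoint buys a large reduction in trial size.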
|
|
Deleted
Deleted Member
Posts: 0
|
Post by Deleted on Jan 6, 2016 14:01:23 GMT -5
Do we know for sure this event was by and for Sanofi?
|
|
|
Post by kdaddyfresh2000 on Jan 6, 2016 14:07:29 GMT -5
Do we know for sure this event was by and for Sanofi? Yes it was. I talked to Sam and Eric about this. Unbelievable duplicity from Sanofi. Want to bet management does nothing despite shareholders getting screwed? Feckless bunch.
|
|