tag:statalist.1588530.n2.nabble.com,2006:forum-1588530Nabble - Statalist2017-02-19T12:28:44Z<h2>Statalist is no longer active and has been moved to a forum at <a href="http://www.statalist.org/" target="_blank" rel="nofollow" link="external">www.statalist.org</a>.
Posts made to Statalist via Nabble will not be read by anyone, so don't expect any response.
These archives persist as a historical record of some of the activity on Statalist.</h2>tag:statalist.1588530.n2.nabble.com,2006:post-6596271Impulse-Response to exogenous VAR variables2011-07-18T13:28:42Z2011-07-18T13:28:42ZWeonho
Hello~
<br/><br/>I am trying to obtain the impulse response functions (IRF) of the endogenous variables of a vector autoregression to a shock in an exogenous variable. I can't seem to figure out a way to do it.
<br/><br/>Using the -irf create- command, I can get the responses of endogenous variables
<br/>only to shocks in other endogenous variables.
<br/><br/>How can I solve this problem?
<br/><br/>Thanks for any suggestions in advance
<br/><br/>Best regards,
<br/><br/>Yang
tag:statalist.1588530.n2.nabble.com,2006:post-5552554-xtwest- requires continuous time series2010-09-20T15:24:40Z2010-09-20T15:24:40Zwangpan110
Hi all, I am running a panel dataset and want to examine panel cointegration.
<br/><br/>1) I use -xtwest- but the screen pops up "continuous time series are required. Following series contain holes". Does it mean my dependent variable has lots of missing values? If it does, how can I cope with these missing values under -xtwest-?
<br/><br/>2) Does anyone know of an ado-file for the Pedroni panel cointegration test? Could you share it with me, please?
<br/><br/>Any help is appreciated.
<br/><br/>Pan
tag:statalist.1588530.n2.nabble.com,2006:post-7580590I can't set more off.2016-06-05T06:30:40Z2016-06-05T06:42:24ZStigy
I tried an event study,
<br/><br/>but I can't estimate normal performance.
<br/><br/> set more off /*this command just keeps stata from pausing after each screen of output */
<br/><br/> gen predicted_return=.
<br/> egen id=group(CODE)
<br/> /* for multiple event dates, use: egen id = group(group_id) */
<br/> forvalues i=1(1)N { /*note: replace N with the highest value of id */
<br/> list id CODE if id==`i' & dif==0
<br/> reg ret Market_return if id==`i' & estimation_window==1
<br/> predict p if id==`i'
<br/> replace predicted_return = p if id==`i' & event_window==1
<br/> drop p
<br/> }
<br/><br/>When I enter this, I get an invalid syntax error, r(198).
<br/><br/>What's the problem?
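A likely cause (editor's sketch): -forvalues- needs numeric bounds, so the literal N itself produces r(198). Computing the bound first avoids hand-editing:

```stata
* sketch: compute the loop bound instead of typing a literal N
summarize id, meanonly
local N = r(max)
forvalues i = 1/`N' {
    reg ret Market_return if id==`i' & estimation_window==1
    predict p if id==`i'
    replace predicted_return = p if id==`i' & event_window==1
    drop p
}
```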
tag:statalist.1588530.n2.nabble.com,2006:post-7580589Monetary Policy Shock2016-04-15T20:23:28Z2016-04-15T20:23:28ZNusrat
Dear All,
<br/>I'm trying to estimate a structural VAR model and see the impact of a monetary policy shock. I know that ordering is important for the identification of structural shocks. In the ordering of variables I have placed my monetary policy variable third from the top, something like this:
<br/>Y=(y p i mr m hp mb) where y, p, mr, m, hp and mb are the other variables.
<br/>I have used the following codes to run my estimation. I'm not sure if I'm doing this correctly.
<br/>. matrix A = (1,0,0,0,0,0,0\.,1,0,0,0,0,0\.,.,1,0,0,0,0\.,.,.,1,0,0,0\.,.,.,.,1,0,0\.,.,.,.,.,1,0\.,.,.,.,.,.,1)
<br/>. matrix B =(.,0,0,0,0,0,0\0,.,0,0,0,0,0\0,0,.,0,0,0,0\0,0,0,.,0,0,0\0,0,0,0,.,0,0\0,0,0,0,0,.,0\0,0,0,0,0,0,.)
<br/><br/>. svar dln_gdp dln_cpi irate mrate1 dln_mor dln_hp dln_mb, aeq(A) beq(B) lags(1/4)
<br/><br/>It'd really be helpful if I could get some useful comments on this.
<br/>Thank you.
tag:statalist.1588530.n2.nabble.com,2006:post-7580588Spider graph - or something else?2016-03-23T14:14:59Z2016-03-23T14:14:59Zstarpen
Hey guys.
<br/><br/>I've been an avid reader of this list for quite some time but I've never contributed in any way or form - I'll change that now (cynics would argue that I only want to participate now that I have a problem).
<br/><br/>Okay, so to make a long story short: I have collected data from several classrooms about their teachers, so a class of 30 students has answered questions about 1 teacher. I've constructed 8 scales from the answers and I want to be able to show the teachers the average score on each of the 8 scales in one chart.
<br/><br/>I've found the "radar" program and I've tried graph7, star, but they aren't doing what I want them to. I basically want the 8 scores placed on the radar's/star's lines, with a mark on each line where the score is. The ideal is to have a mark; the next best thing is a connected line between the lines in the radar/star. I've attached a picture with the basics of what I'm looking for (but this does not have the markings/connected lines). I think one of my problems is that I have 8 variables (1 for each scale) that only contain the score when I take the mean.
<br/><br/><img src="http://statalist.1588530.n2.nabble.com/file/n7580588/Radar-Chart-Example.jpg" border="0"/><br/><br/>The closest I've come is:
<br/><br/>graph pie Q1 Q2 Q3 Q4 Q5 Q6 Q7 Q8
<br/><br/>Which is horrible, to say the least.
<br/><br/>Thank you for your help!
tag:statalist.1588530.n2.nabble.com,2006:post-7580586Set up svyset in cross-sectional design survey research2016-03-07T13:26:17Z2016-03-07T13:26:17Zhamzah
Dear colleagues,
<br/><br/>I have a large-scale basic health research survey. It provides basic data collected through a national survey, so its results can be used to inform health policy at the district/city level. The survey collected information from 258,366 sampled households and 987,205 sampled household members, measuring many public health indicators. The sampling design of the survey was two-stage sampling. This design requires special treatment (complex samples in SPSS, or -svyset- in Stata) so that the two-stage design is taken into account when processing and analyzing the data set and the validity of the analysis results can be optimized.
<br/>In addition, the dataset I have already contains the variables PSU, Weight, Inflate and Strata. Given the information above and the sampling design, I would like to ask you some questions.
<br/><br/>The first question.
<br/>Setting up the dataset with -svyset- at the beginning of the process: which of the following ways of writing the syntax is better, and how would the results differ if I chose one of them, as below?
<br/><br/>1st syntax svyset
<br/>svyset psu [pweight=inflate], strata(strata), vce(linearized), singleunit(missing)
<br/>pweight: inflate
<br/>VCE: linearized
<br/>Single unit: missing
<br/>Strata 1: strata
<br/>SU 1: psu
<br/>FPC 1: <zero><br/><br/> or….
<br/><br/>2nd syntax svyset
<br/>. svyset [pw=weight], strata(strata) psu(psu)
<br/> pweight: weight
<br/> VCE: linearized
<br/> Single unit: missing
<br/> Strata 1: strata
<br/> SU 1: psu
<br/> FPC 1: <zero><br/><br/>Or
<br/>3rd syntax svyset
<br/>svyset [pweight = inflate],strata(strata)
<br/> pweight: inflate
<br/> VCE: linearized
<br/> Single unit: missing
<br/> Strata 1: strata
<br/> SU 1: <observations><br/> FPC 1: <zero><br/><br/>
<br/><br/>The second question.
<br/>Even though the survey design used two-stage sampling, from -svydescribe- we only get the data for one stage. What does this mean?
<br/><br/>. svydescribe
<br/><br/>Survey: Describing stage 1 sampling units
<br/>pweight: inflate
<br/>VCE: linearized
<br/>Single unit: missing
<br/>Strata 1: strata
<br/>SU 1: <observations><br/>FPC 1: <zero><br/><br/>
<br/>Could you please give me some advice?
<br/>Sincerely yours
<br/><br/>
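As an editor's note, a hedged sketch of the -svyset- syntax being compared: all options follow a single comma (the first variant above has extra commas between the options), and a second sampling stage can be declared after ||. The second-stage names here (ssu, strata2) are hypothetical:

```stata
* one option list after a single comma
svyset psu [pweight=inflate], strata(strata) vce(linearized) singleunit(missing)

* a genuinely two-stage declaration would look like this
* (ssu and strata2 are hypothetical second-stage variables)
svyset psu [pweight=inflate], strata(strata) || ssu, strata(strata2)
```

Which weight variable (inflate vs weight) is correct depends on how the survey documentation defines them.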
tag:statalist.1588530.n2.nabble.com,2006:post-7580385xtmelogit: negative variance?2013-06-25T04:11:02Z2013-06-25T04:19:12ZHanne F
Hi,
<br/><br/>I am currently using a two-level random-slope model, a logistic regression. The reason I use multilevel modelling is to control for a possible design effect. I am using xtmelogit, but I already tried gllamm and xtlogit as well. I have data for three measurements. When running the same model on the first two measurements, I do not experience any problems. By contrast, for the third measurement, I do not seem to get an accurate estimate of the second-level variance: sd(_cons) is virtually zero, and the confidence interval is also odd: I get exactly 0 as the lower bound and '.' as the upper bound. The p-value indicating whether or not it is necessary to use a multilevel design is exactly zero. Can anyone tell me why this is the case?
<br/>In a multilevel course based on MLwiN I learned that in this case you should 'allow for negative variance'. Can anybody please tell me how I could do this with Stata?
<br/><br/>thank you very much in advance!!
<br/><br/>best regards,
<br/>Hanne
tag:statalist.1588530.n2.nabble.com,2006:post-4494919st: Sample selection in fixed effects model2010-02-01T08:05:35Z2010-02-01T08:05:35Zmw N. van Horen
<p>Dear Statalisters<br>
<br>
Does anyone know whether a user-written program is currently available in Stata to estimate a sample selection model for panel data (using fixed effects)? There was some discussion on the subject in <a href="http://www.stata.com/statalist/archive/2005-04/msg00112.html" target="_top" rel="nofollow" link="external"><u><font color="#0000FF">http://www.stata.com/statalist/archive/2005-04/msg00112.html</font></u></a> from which I understand it was not implemented in 2005. Does anyone know whether this has changed? Or is anyone familiar with a program written in another statistical package (Matlab, Gauss, etc.)? <br>
<br>
Thank you very much for your help. <br>
<br>
Best,<br>
Neeltje<br>
<br>
<br>
<br>
De Nederlandsche Bank <br>
Economic Policy and Research Division / Research Department <br>
tel: +31-20-5245704<br>
<br>
Access my research papers at: <br>
<a href="http://www.dnb.nl/onderzoek/onderzoekers/persoonlijke-paginas/auto171982.jsp" target="_top" rel="nofollow" link="external">http://www.dnb.nl/onderzoek/onderzoekers/persoonlijke-paginas/auto171982.jsp</a><br>
<a href="http://ssrn.com/author=118741" target="_top" rel="nofollow" link="external">http://ssrn.com/author=118741</a><br>
<a href="http://ideas.repec.org/f/pva277.html" target="_top" rel="nofollow" link="external">http://ideas.repec.org/f/pva277.html</a><br>
<br>
<br>
tag:statalist.1588530.n2.nabble.com,2006:post-7580583New command - conindex - available from SSC2015-11-15T10:08:06Z2015-11-15T10:08:06ZStephen ONeill
Dear Stata colleagues,
<br/><br/>Thanks to Kit Baum, conindex is now available from SSC. This command computes a range of rank-dependent inequality indices, including the Gini coefficient, the concentration index, the generalized (Gini) concentration index, the modified concentration index, the Wagstaff and Erreygers normalised concentration indices for bounded variables, and the distributionally sensitive extended and symmetric concentration indices (and their generalized versions).
<br/><br/>Sincerely,
<br/>Owen O'Donnell, Stephen O'Neill, Tom Van Ourti and Brendan Walsh
tag:statalist.1588530.n2.nabble.com,2006:post-7580582problem2015-11-10T14:21:47Z2015-11-10T14:21:47Zsted
my list of variables is described below:
<br/><br/>female age wage satisfied unversity married kids work health
<br/>I could not run a probit regression model; it shows an r(2000) error:
<br/>. probit satisfied wage age female unversity married kids work health
<br/><br/>outcome does not vary; remember:
<br/> 0 = negative outcome,
<br/> all other nonmissing values = positive outcome
<br/>r(2000);
<br/><br/>.
<br/>How can I deal with this problem?
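As an editor's note, r(2000) means the outcome is constant in the estimation sample (every observation usable in the regression has the same value of satisfied). A quick diagnostic sketch:

```stata
* check the outcome's distribution, including missings
tabulate satisfied, missing

* see which variables' missing values shrink the estimation sample
misstable summarize satisfied wage age female unversity married kids work health
```

If satisfied takes only one value among observations with no missing regressors, probit cannot be fit on that sample.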
<br/>
tag:statalist.1588530.n2.nabble.com,2006:post-7580203Propensity Score Matching2012-06-16T01:12:25Z2012-06-16T01:12:25Zniken kusumawardhani
Hi all,
<br/><br/>I have a question on Propensity Score Matching. I'm trying to evaluate the impact of migration on children's schooling. My data is cross-section and I do not have child-level data from the time before migration occurred. But I do have household-level data from before migration occurred. Therefore, I decided to match based on household-level data, since it is measured before participation in migration.
<br/><br/>Since my outcome is at individual-level, there might be some individual characteristics that affect my outcome. Estimating the impact of migration by propensity score matching constructed based on household-level variables won't be enough. My question is, can I estimate the impact of migration using propensity score matching (covariates used are household-level) and also incorporate some individual-level variables?
<br/><br/>I'm thinking of estimating a model such as:
<br/><br/>Sij = Mj + Gj + Aij + Bij + e
<br/><br/>For Sij = schooling of child i at household j
<br/>Mj = 1 for migrant household, 0 for household without migrant
<br/>Gj = propensity score for household j (the same for all kids at one household)
<br/>Aij = for example age of the child i
<br/>Bij = for example sex of the child i
<br/>and then since children in household are related, I'm gonna cluster the standard errors at household level.
<br/><br/>Is it possible to do this with propensity score matching? Could someone tell me how to do it in Stata?
<br/><br/>I've read a lot of references using PSM, but none of them has additional variables to predict the ATT as in my problem.
tag:statalist.1588530.n2.nabble.com,2006:post-7580449Initial values not feasible using meologit2013-10-31T11:12:27Z2013-10-31T11:12:27Zmarciaipj
Hi all,
<br/><br/>I was hoping you could help me. I am trying to run a cross random effects model in Stata 13.
<br/>But so far, I have not been able to get past the error message "initial values not feasible".
<br/>I have tried changing the start values to the 4 possible options (zero, iv, constantonly and fixedonly), I have also tried the startgrid option (although I am not sure what that is), and finally I tried running a simpler model to get some initial values and then using them as initial estimates for the more complex model. But I still cannot fix the problem.
<br/>This is the code I have used to get initial estimates from a simpler model:
<br/><br/>*Obtain initial values from a simpler model
<br/>ologit happy female c.age##c.age
<br/>mat x=e(b)
<br/>meologit happy female c.age##c.age || coh5: || year: , from(x)
<br/><br/>Can someone please help me? Are there any other options to solve this? Also, I read that using the command evaltype(gf0) might help, but what does that mean?
<br/><br/>Thanks!
<br/>Marcia
<br/><br/>
tag:statalist.1588530.n2.nabble.com,2006:post-7580579Finding fuzzy string duplicates2015-06-12T12:00:38Z2015-06-12T12:00:38Zsrh
Hello everyone,
<br/><br/>I am working on cleaning a large panel data set. I have done some cleaning and now have a list of unique company names. However, there are still some inconsistencies in the spelling of company names that lead to separate cases which I would like to match and consolidate under the same name/id.
<br/><br/>I initially tried using reclink in order to match the data to itself. However, when doing this, reclink gives a set of perfect matches (each entry matches perfectly with itself). What I would really like to do is to look for the second-best/non-perfect matches in order to identify misspelled duplicates. To my knowledge, reclink doesn't have an option for ignoring the perfect matches.
<br/><br/>Is there a way to do this with reclink? Or is there another method that would work better?
<br/><br/>Thanks!
<br/>S
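An alternative worth noting (editor's sketch): the user-written -matchit- (Julio Raffo, SSC) scores fuzzy similarity between string columns. Matching the name list against a copy of itself and then dropping the self-matches leaves the candidate misspelled duplicates. The id/name variable and dataset names below are hypothetical:

```stata
* ssc install matchit
* match the list of unique names against a copy of itself, then
* discard the trivial self-matches so only fuzzy duplicates remain
use names, clear
rename (id name) (id1 name1)
tempfile copy
save `copy'
use names, clear
matchit id name using `copy', idusing(id1) txtusing(name1) threshold(0.7)
drop if id == id1          // each entry matches itself perfectly; drop those
gsort -similscore          // best candidate duplicates first
```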
tag:statalist.1588530.n2.nabble.com,2006:post-7580578ROCTAB Confidence Intervals2015-06-11T12:20:26Z2015-06-11T12:20:26ZForensicsCrime
Hello,
<br/><br/>I am working to establish the diagnostic accuracy of a tool using ROC curves. However, one of my variables (Criteria) is ordinal 0-6. I would like to get the confidence intervals of sensitivity and specificity for each cutoff score. Below is some data and code:
<br/><br/>Criteria KnownDisease
<br/>1 0
<br/>0 0
<br/>0 0
<br/>0 0
<br/>0 0
<br/>0 0
<br/>0 0
<br/>3 1
<br/>5 1
<br/>3 1
<br/>5 1
<br/>4 1
<br/>6 1
<br/>4 1
<br/>4 1
<br/>5 1
<br/>5 1
<br/>4 1
<br/><br/>roctab KnownDisease Criteria, detail
<br/><br/>* Under detail I would like the confidence intervals of each sensitivity and specificity
<br/><br/>
tag:statalist.1588530.n2.nabble.com,2006:post-7580573Size effect regression2015-03-12T13:04:28Z2015-03-12T13:04:28Zchicago23
Hello,
<br/><br/>I wanted to see the size effect in my regression, so I took my equation and turned all categorical variables into dummy variables. However, when I ran my regression with the new dummy variables, the coefficients changed on all explanatory variables. Is this normal? Shouldn't all coefficients stay the same, since I'm using the same outcome, explanatory and control variables?
tag:statalist.1588530.n2.nabble.com,2006:post-3593265st: Outreg2 - file cannot be opened2009-09-06T10:13:12Z2009-09-06T10:13:12ZChristian Weiss-2
Dear Statalisters,
<br/><br/>What I intend to do is to run multiple regressions and eventually save
<br/>all the results via outreg2 in an Excel file.
<br/><br/>To do so, the do-file contains the following outreg2 code (regression
<br/>and estimates store code omitted):
<br/>regression 1
<br/>outreg2 [*] using myfile , replace sideway bdec(3) groupvar(Group1
<br/>ebit5yr beta beta_sqr logmarketcap propertyrights regulation
<br/>tcr5hat_sqr Group2 debtofassets capexofassets diversification
<br/>Group3)
<br/><br/>regression2
<br/>outreg2 [*] using myfile, append sideway bdec(3) groupvar(Group1
<br/>ebit5yr beta beta_sqr logmarketcap propertyrights regulation
<br/>tcr5hat_sqr Group2 debtofassets
<br/>capexofassets diversification Group3)
<br/>file d:\asd.txt could not be opened
<br/><br/>regression3
<br/>. outreg2 [*] using myfile, replace sideway bdec(3) groupvar(Group1
<br/>ebit5yr beta beta_sqr logmarketcap propertyrights regulation
<br/>tcr5hat_sqr Group2 debtofassets
<br/>capexofassets diversification Group3) excel
<br/><br/>The first regression uses outreg "replace" option to start a new/clean
<br/>outreg2 reporting table, the last one the "excel" option to generate
<br/>the .csv file
<br/><br/>However, I frequently receive the error "file myfile.txt could not be
<br/>opened", r(603).
<br/>The error seems to occur somewhat at random and I am not able to
<br/>reproduce it every time. I already tried changing the working
<br/>directory, using the full path specification for the file, and deleting
<br/>all the files from the working directory. I did not change any
<br/>permissions on directory folders; the standard working directory is the
<br/>Windows user folder.
<br/><br/>I also tried outreg2 without the "append" option and without the
<br/>"excel" option; same problem.
<br/><br/>Do you have any suggestions how to yield the intended results?
<br/><br/>Thx a lot
<br/>Chris
<br/><br/>Best regards,
<br/>Christian
<br/><br/>*
<br/>* For searches and help try:
<br/>* <a href="http://www.stata.com/help.cgi?search" target="_top" rel="nofollow" link="external">http://www.stata.com/help.cgi?search</a><br/>* <a href="http://www.stata.com/support/statalist/faq" target="_top" rel="nofollow" link="external">http://www.stata.com/support/statalist/faq</a><br/>* <a href="http://www.ats.ucla.edu/stat/stata/" target="_top" rel="nofollow" link="external">http://www.ats.ucla.edu/stat/stata/</a><br/>
tag:statalist.1588530.n2.nabble.com,2006:post-7580568Error message - "invalid file specification"2015-02-12T17:20:48Z2015-02-12T17:20:48Zmohana
Hello Statalist,
<br/><br/>When I run a program based on the "sbe32" package instructions,
<br/>I get the error message "invalid file specification" following the -outbrk-
<br/>command. In an attempt to figure out what was going on, I ran the
<br/>exact code (using the same files) suggested in the Stata example (available online), and for that given example it works fine.
<br/>I'd very much appreciate any suggestions.
<br/><br/>Sincerely,
<br/>Mohana
tag:statalist.1588530.n2.nabble.com,2006:post-7580567Two-stage estimation under Heckman2015-02-07T07:23:02Z2015-02-07T07:23:02ZFasahat Waqar
I want to run a two-stage estimation with 3 choices: probit in the 1st stage and OLS in the subsequent stage.
<br/>Kindly tell me how I can calculate the inverse Mills ratio and predicted probabilities.
<br/>regards
<br/>FASAHAT
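For reference, a hedged sketch of the manual computation (editor's addition; selected, x1, x2 and z1 are hypothetical variable names), using the standard formula IMR = phi(xb)/Phi(xb) from a first-stage probit:

```stata
* first stage: probit of the selection indicator
probit selected x1 x2 z1
predict double xb, xb                      // linear prediction
gen double imr = normalden(xb)/normal(xb)  // inverse Mills ratio
predict double phat, pr                    // predicted probabilities
```

The imr variable can then be included as a regressor in the second-stage OLS on the selected sample.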
tag:statalist.1588530.n2.nabble.com,2006:post-4827137psmatch2-identifying matched pairs2010-03-30T13:40:20Z2010-03-30T13:40:20ZGarth Rauscher
I apologize if this has been answered before but I could not find the
<br/>solution in the archives. When I perform 1-1 matching using psmatch2,
<br/>several new variables are added to my dataset. Among these are _id, which is
<br/>a unique identifier for each observation, and _n1, which identifies the _id
<br/>for the observation in the matched pair. I'd like to be able to define a new
<br/>variable that uniquely identifies each matched pair. In other words if there
<br/>are 100 observations making 50 matched pairs, there are 100 unique values of
<br/>_id and 100 unique values of _n1. I'd like a variable with 50 unique values,
<br/>one for each matched pair. Stata doesn't appear to create this variable.
<br/>Having this variable would enable me to conduct stratified analyses
<br/>(stratifying on matched pairs) outside of what is provided by psmatch2.
<br/>Given _id and _n1 there is probably a way to do this, but it is beyond my
<br/>skill set. I'd appreciate any ideas out there as to how to go
<br/>about defining this variable.
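One possible construction (editor's sketch), assuming psmatch2's 1-to-1 matching without replacement and its generated variables _id, _n1 and _treated:

```stata
* treated observations label the pair with their own _id ...
gen long pairid = _id if _treated==1 & _n1 < .

* ... then carry that label over to each matched control via _n1
preserve
keep if _treated==1 & _n1 < .
keep _id _n1
rename _id pairid
rename _n1 _id         // key on the matched control's _id
tempfile pairs
save `pairs'
restore
merge 1:1 _id using `pairs', update nogenerate
```

With matching with replacement a control can appear in several pairs, and a single pair id per observation is no longer well defined.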
<br/><br/>Thanks very much
<br/><br/>Garth
<br/><br/>Garth Rauscher
<br/>Associate Professor of Epidemiology
<br/>Division of Epid/Bios (M/C 923)
<br/>UIC School of Public Health
<br/>1603 West Taylor Street
<br/>Chicago, IL 60612
<br/>ph: (312)413-4317
<br/>fx: (312)996-0064
<br/>em: <a href="/user/SendEmail.jtp?type=node&node=4827137&i=0" target="_top" rel="nofollow" link="external">[hidden email]</a>
<br/>
tag:statalist.1588530.n2.nabble.com,2006:post-7580558Significance for non-inferiority using 2x2 data2014-12-19T15:24:10Z2014-12-19T15:24:10Zthomas_hiemstra
Dear Statalist,
<br/><br/>I have 2x2 data from a clinical trial:
<br/><br/>Event vs no event x treatment vs control.
<br/><br/>The hypothesis is non-inferiority with a margin of 12%.
<br/><br/>I use
<br/>- cs event treatment, limit(90) -
<br/>to obtain the risk ratio and 90% CI. But is there any way within Stata to obtain the 1-sided alpha for non-inferiority?
<br/><br/>Thanks for considering,
<br/><br/>Thomas
tag:statalist.1588530.n2.nabble.com,2006:post-4127009st: quaids model2009-12-07T09:16:36Z2009-12-07T09:16:36ZCristiana Rodrigues
Dear,
<br/><br/>I am trying to specify a QUAIDS model to estimate 18 demand equations, but the -nlsur- model accepts only three equations; if I specify one more or one less, I receive the following error message:
<br/><br/>nlsurquaids returned 102
<br/>verify that nlsurquaids is a function evaluator program
<br/>r(102)
<br/><br/>My model has the form:
<br/><br/>nlsur quaids @ w1 w2 w3 w4 w5 w6 w7 w8 w9 w10 w11 w12 w13 w14 w15 w16 w17 lnp1-lnp18 lnexp anosestudo estudomulher /// adolescented adultod idosod totalpes sexo norte nordeste sul sudeste renda [aw=peso], ifgnls nequations(17) param(a1 a2 a3 a4 a5 ///
<br/>a6 a7 a8 a9 a10 a11 a12 a13 a14 a15 a16 a17 b1 b2 b3 b4 b5 b6 b7 b8 b9 b10 b11 b12 b13 b14 b15 b16 b17 g11 g12 g13 ///
<br/>g14 g15 g16 g17 g18 g19 g110 g111 g112 g113 g114 g115 g116 g117 g22 g23 g24 g25 g26 g27 g28 g29 g210 g211 g212 g213 ///
<br/>g214 g215 g216 g217 g33 g34 g35 g36 g37 g38 g39 g310 g311 g312 g313 g314 g315 g316 g317 g44 g45 g46 g47 g48 g49 g410 ///
<br/>g411 g412 g413 g414 g415 g416 g417 g55 g56 g57 g58 g59 g510 g511 g512 g513 g514 g515 g516 g517 g66 g67 g68 g69 g610 ///
<br/>g611 g612 g613 g614 g615 g616 g617 g77 g78 g79 g710 g711 g712 g713 g714 g715 g716 g717 g88 g89 g810 g811 g812 g813 ///
<br/>g814 g815 g816 g817 g99 g910 g911 g912 g913 g914 g915 g916 g917 g1010 g1011 g1012 g1013 g1014 g1015 g1016 g1017 ///
<br/>g1111 g1112 g1113 g1114 g1115 g1116 g1117 g1212 g1213 g1214 g1215 g1216 g1217 g1313 g1314 g1315 g1316 g1317 ///
<br/>g1414 g1415 g1416 g1417 g1515 g1516 g1517 g1616 g1617 g1717 l1 l2 l3 l4 l5 l6 l7 l8 l9 l10 l11 l12 l13 l14 l15 l16 l17) nolog
<br/><br/>I need to specify all 18 equations. Also, I need to include other explanatory variables in my model, beyond the logarithm of prices and the logarithm of expenditures. As I have 18 goods, I have 204 parameters, which were specified in the model, but how should I specify the nparameters() for the additional explanatory variables that I include in my model? Can anyone help me solve this?
<br/><br/>Thank you in advance for your help
<br/><br/>Cristiana
tag:statalist.1588530.n2.nabble.com,2006:post-7580556Loop according to existence of variable label2014-11-28T13:30:25Z2014-11-28T13:30:25Zjfca283
Hi.
<br/>I'm a bit lost with this.
<br/>I am trying to write the label for varx.
<br/>But in some of the databases the variable's label exists and in others it doesn't.
<br/>I should make clear that the variable and the label share the same name.
<br/><h3><pre>
cd C:\Users\JC\Bases\Stata\
local filelist: dir . files "*.dta"
foreach file of local filelist {
use "`file'", clear
label drop varx
label define varx 1 "text1" 15 "text15"
label values varx varx
save "`file'", replace
}</pre></h3><br/>The exposed code does run flawless until i encounter a database with the varx not having a label. It stops my code and it says
<br/><h3>value label varx not found</h3>So, summarising:
<br/>Whether there is or not a label for the varx, i just want that from now on the label for the mentioned variable be as i declare in the loop and save the database.
<br/>Thanks for your replies and time.
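A sketch of the pattern that seems to be needed here (editor's addition): prefixing the fragile command with -capture- lets the loop continue when the label does not exist:

```stata
local filelist : dir . files "*.dta"
foreach file of local filelist {
    use "`file'", clear
    capture label drop varx      // ignore the error if the label is absent
    label define varx 1 "text1" 15 "text15"
    label values varx varx
    save "`file'", replace
}
```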
tag:statalist.1588530.n2.nabble.com,2006:post-7580555create matrix with rows and columns according to variables length?2014-11-22T23:16:43Z2014-11-22T23:16:43Zjfca283
<br/> Hi
<br/> I need to create a loop. The issue is I have 2 variables, X and Y, and both are strings.
<br/> So, how do I create a matrix of the form
<br/> J(lengthX, lengthY, .)?
<br/> Besides, I need the names of the rows and columns to come from the strings in X and Y.
<br/> Using an example:
<br/> The labels of X are 01 05 06 07 08 09 and of Y are 01 02 03 04 05 06 08.
<br/> So, I would need to create a matrix with 6 rows and 7 columns, naming the rows
<br/> 01, 05, 06, 07, 08, 09 and the columns 01, 02, 03, 04, 05, 06, 08.
<br/> I know how to fill that matrix, but I have no idea how to declare the number of rows or columns for the matrix, nor the names of each one.
<br/> Using the option -destring- doesn't suit me because I have labels such as 011 and others such as 11, which are completely different.
<br/> Thanks for your time and interest.
<br/> Really.
<br/><br/>
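One way to sketch this (editor's addition) is with -levelsof-, which collects the distinct string values into a local macro; note that stripe names such as 01 may need a letter prefix if Stata rejects purely numeric names:

```stata
* collect the distinct values of each string variable (clean strips quotes)
levelsof X, local(rows) clean
levelsof Y, local(cols) clean
local nr : word count `rows'
local nc : word count `cols'
matrix M = J(`nr', `nc', .)
matrix rownames M = `rows'
matrix colnames M = `cols'
```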
tag:statalist.1588530.n2.nabble.com,2006:post-7580554non-recursive svar2014-11-15T18:59:16Z2014-11-15T18:59:16Zsafrin
I am trying to estimate an SVAR model with non-recursive short-run restrictions. The model is over-identified, but in Stata it is not working: during the iterations, when it looks for convergence, it always shows "not concave". I am using Stata 13. Can anyone please suggest how I can overcome this problem?
<br/><br/>Thanks in advance.
tag:statalist.1588530.n2.nabble.com,2006:post-3834123st: AW: Graph: Colouring table cells based on conditions or data distribution2009-10-16T00:33:53Z2009-10-16T00:33:53ZStefan.Gawrich
Hi Statalisters,
<br/><br/>many of you may know this feature from MS Excel (from Version 2007 on).
<br/>It's more or less a "choropleth map" for data tables (cell colour
<br/>depends on cell value relative to the table distribution or on
<br/>conditions).
<br/>(see:
<br/><a href="http://msdn.microsoft.com/en-us/library/bb428945%28office.11%29.aspx" target="_top" rel="nofollow" link="external">http://msdn.microsoft.com/en-us/library/bb428945%28office.11%29.aspx</a>)
<br/><br/><br/>In my work I often have to display both a table (to pick data from) and
<br/>a graph (for visual impression) to users and I'm constantly trying to
<br/>combine both. Colouring tables is one way to do this if rules/conditions
<br/>are user-defineable.
<br/><br/>I thought about setting up a coloured twoway table in Stata but that
<br/>doesn't seem to be easy:
<br/><br/>Stata's results window doesn't allow for fancy tricks so the table must
<br/>be in a graph.
<br/><br/>Spmap by Maurizio Pisati has all colouring options I need and can be set
<br/>up with rectangular polygons and polygon centroids for labeling.
<br/>But it's not written for that purpose, so it's difficult to set up:
<br/>- a n*m polygon-coordinates dataset for each table must be created
<br/>- linkage of table cells to polygons
<br/>- handling of missing values and category "missing"
<br/>- use of value labels
<br/>- display of row and/or column totals (yes/no)
<br/>- ...
<br/><br/>I already started to produce some coloured tables with spmap but these
<br/>are only first tests.
<br/>I'm curious if there is any other graph framework available in Stata
<br/>which also may be suitable for this purpose.
<br/><br/>Best wishes
<br/>Stefan Gawrich
<br/><br/><br/><br/><br/><br/><br/>*
<br/>* For searches and help try:
<br/>* <a href="http://www.stata.com/help.cgi?search" target="_top" rel="nofollow" link="external">http://www.stata.com/help.cgi?search</a><br/>* <a href="http://www.stata.com/support/statalist/faq" target="_top" rel="nofollow" link="external">http://www.stata.com/support/statalist/faq</a><br/>* <a href="http://www.ats.ucla.edu/stat/stata/" target="_top" rel="nofollow" link="external">http://www.ats.ucla.edu/stat/stata/</a><br/>
tag:statalist.1588530.n2.nabble.com,2006:post-7580552Exporting a three-way table in excel or csv?2014-09-09T14:31:04Z2014-09-09T14:31:04Zjfca283
Hi
I hope you can help me.
I need to export multiple tables of this kind:
<pre>table varx1 sex education [iweight = fexp], c(mean ing_t_p) center row col format(%9,0fc)</pre>
I've tried tabout, but I didn't manage very well.
Copying from the log file didn't work well either:
the tables got cut off when I tried the usual copy/paste process.
Do you know of a way to export these tables without slicing them and without major adjustments such as "Text to Columns" in Excel?
I already knew tabout (in fact, I use it quite often), but three-way tables aren't implemented in it yet.
Thanks for your interest and time.
Really.
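One possible route, offered as an untested sketch: -table-'s replace option swaps the data in memory for one observation per table cell, which can then be exported directly instead of copied from the log. The file name "threeway.xls" is invented; the rest follows the command in the post:
<pre>
* Sketch: let -table- replace the data in memory with its cells,
* then export the result. preserve/restore keeps the original data.
preserve
table varx1 sex education [iweight = fexp], c(mean ing_t_p) replace
export excel using "threeway.xls", firstrow(variables) replace
restore
</pre>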
tag:statalist.1588530.n2.nabble.com,2006:post-7580551Help with matrix and loop2014-09-02T22:36:51Z2014-09-02T22:36:51Zjfca283
Hi
<br/>I hope you will understand my problem.
<br/>I wrote this code
<br/><pre>
matrix A=J(17,8,0)
forval i=1/17 {
forval j=0/7 {
quietly summarize var8 if branch== `i' & class== `j' [iw=expansion]
matrix A[`i' , `j'+1] = r(sum_w)
}
}</pre>The matrix A has 17 rows and 8 columns because branch runs from 1 to 17 and class from 0 to 7.
<br/>So I'm storing r(sum_w) for every combination.
<br/>But now I need to expand
<br/>quietly summarize var8 if branch== `i' & class== `j' [iw=expansion]
<br/>to
<br/>quietly summarize var8 if branch== `i' & class== `j' & area== `iregion' [iw=expansion]
<br/>where area is a variable indexing regions, numbered from 1 to 15.
<br/>But I don't know how to store the values in the matrix.
<br/>I thought of generating 15 matrix names and adding a forval iregion=1/15 loop, but it didn't work.
<br/>Can you guide me on this?
<br/>Thanks for your time and interest.
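<br/>For what it's worth, a minimal sketch of one way to do the extension described above, assuming a separate matrix per region (A1 ... A15) is acceptable; all variable names follow the post:
<pre>
* Sketch: one 17 x 8 matrix of r(sum_w) values per region.
forval r = 1/15 {
    matrix A`r' = J(17, 8, 0)
    forval i = 1/17 {
        forval j = 0/7 {
            quietly summarize var8 if branch== `i' & class== `j' & area== `r' [iw=expansion]
            matrix A`r'[`i', `j'+1] = r(sum_w)
        }
    }
}
</pre>
When no observations match a combination, summarize should return r(sum_w)=0, so the corresponding cell stays zero.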
tag:statalist.1588530.n2.nabble.com,2006:post-7580550Wealth score using principal component analysis (PCA)2014-08-23T00:26:52Z2014-08-23T00:26:52ZPriyanka Malla
Hi!
<br/><br/>I have created a wealth index as recommended by Filmer and Pritchett. After running -pca-, I predicted pc1 and pc2 using the following command:
<br/><br/>predict pc1 pc2, score
<br/><br/>My questions are:
<br/>1. How do I score the predicted pc1 for each indicator variable?
<br/>I used the -score- command, but I get the error "score not valid after pca".
<br/><br/>I also wanted to construct quintiles and used the following command:
<br/><br/> -xtile quintile = pc1 [w=iweallth], nq(3)
<br/><br/>-bysort quintile: summarize EW_cement EW_mud EW_wood EW_bamboo EW_unbakedbrks EW_othmaterial EW_nowall RF_straw RF_mud RF_wood RF_iron RF_cement RF_tile RF_other T_flush T_septic T_noflush T_notoilet L_electricity L_solar L_biogas L_kerosene Lother FS_wood FS_dung FS_leaves FS_cyln_gas FS_kerosene FS_biogas FS_other WS_piped WS_covered_well WS_handpump WS_openwell WS_spring_water
<br/><br/>2. Is this command correct?
<br/><br/>Thank You!!
<br/>
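<br/><br/>For reference, a hedged sketch of the usual Filmer and Pritchett workflow; the asset variable list is shortened here, iweallth is the weight variable named in the post, and note that -xtile-'s nq() sets the number of groups, so quintiles need nq(5):
<pre>
* Sketch: first principal component as the wealth score, then quintiles.
pca EW_cement RF_straw T_flush L_electricity FS_wood WS_piped
predict pc1, score          // -score- itself is not valid after pca;
                            // -predict, score- is the postestimation route
xtile quintile = pc1 [w=iweallth], nq(5)
tabstat pc1, by(quintile) stat(mean min max)
</pre>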
tag:statalist.1588530.n2.nabble.com,2006:post-7580549Combined Pearson/Spearman Correlation Matrix with significance2014-08-08T11:01:04Z2014-08-08T12:49:12Zeternity1
Dear all,
<br/><br/>According to the code from the following post in this forum:
<br/><a href="http://hsphsun3.harvard.edu/cgi-bin/lwgate/STATALIST/archives/statalist.1401/date/article-353.html" target="_top" rel="nofollow" link="external">Combined Pearson/Spearman Correlation Matrix</a><br/><br/>I am trying to calculate the combined Pearson and Spearman correlation matrix.
<br/>I am using -corr- and -spearman-, which works fine. But with -corr- I cannot get the significance (p-value or stars and so on). So I tried -pwcorr-,-list- which gives me the exact results but with significance.
<br/><br/>Using this "new combination" I cannot create the matrix as with -corr-. Here is the code:
<pre>
//Get Pearson Matrix
corr var1 var2 var3
matrix R = r(C)

//Get Row and Column Names
local rnames : rownames R
local cnames : colnames R

//Get Spearman Rank Matrix
spearman var1 var2 var3, matrix star(0.05)
matrix S = r(Rho)

//Convert Pearson Matrix to Mata Matrix
mata: mataR = st_matrix("R")

//Convert Spearman Rank Matrix to Mata Matrix
mata: mataS = st_matrix("S")

//Clone Mata Pearson Matrix for Combined mataRS Mata Matrix
//(Pearson and Spearman Rank Matrix in Mata)
mata: mataRS = mataR

//Replace Pearson r with Spearman rho in Top Half of Combined mataRS Mata Matrix
mata: mataRS[1,2] = mataS[2,1]
//... and so on for the remaining cells

//Display Pearson, Spearman Rank, and Combined Matrices in Mata
mata: mataR
mata: mataS
mata: mataRS

//Convert Combined mataRS Mata Matrix to Stata Matrix RS
mata: st_matrix("RS", mataRS)
matrix rownames RS = `rnames'
matrix colnames RS = `cnames'

//Display Combined Stata Matrix RS
matlist RS, format(%8.4f)
</pre>
<br/><br/>When I replace -corr- with -pwcorr-, -list-, I get the following error at the line mata: mataRS[1,2] = mataS[2,1]:
<br/><br/>"&lt;istmt&gt;: 3301 subscript invalid"
<br/><br/>Is there a "smart" way to solve this? (By the way, I am working with Texmaker, so it would be great if the output could be transferred to LaTeX.)
<br/><br/>Thank you very much.
<br/><br/>--
tag:statalist.1588530.n2.nabble.com,2006:post-7580548Panel data and controlling variables2014-08-04T08:30:24Z2014-08-04T08:30:24Zinesf
Good afternoon,
<br/><br/>I have a dataset for the following regression, for example: dummy_age = set of dummy_characteristics + economic variables (GDP, population, unemployment).
<br/><br/>The dataset is as following:
<br/><br/>firm_id / entry_year / others
<br/>1 / 2008
<br/>2 /2006
<br/>3 / 1999
<br/>4 / 2006
<br/><br/>entry_year is the year in which the firm was founded; it ranges from 1986 to 2009, so these values are repeated across the sample.
<br/><br/>My first step was joining the economic variables to the base data by merging. The problem here is that GDP values are repeated because entry years are also repeated.
<br/><br/>When I try to declare the dataset to be panel data (xtset firmid year) I get the error "repeated time values within panel". I also tried to run the regression, but Stata says there are no observations.
<br/><br/>Can you help me solve this problem?
tag:statalist.1588530.n2.nabble.com,2006:post-7580547Tabout - Displaying the database used2014-07-31T10:47:52Z2014-07-31T10:47:52Zjfca283
Hi
I hope you can help me.
I produce the following result in Excel with this loop:
<pre>cd C:\Users\JC\Bases\Stata
local filelist: dir . files "*.dta"
foreach file of local filelist {
    use "`file'", clear
    dis c(filename)
    renvars *, lower
    tabout sex education [iw=fact] using "temp.xls", append
}
</pre>
But the loop involves over 50 files, so I need the exported table to include the name of the dataset it came from.
Do you know how to do this?
I'm sorry if I didn't explain my case clearly.
Thanks for your comments and time.
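A hedged sketch of one possibility: if the installed version of -tabout- supports heading options such as h3() (worth checking in help tabout), the current file name can be written above each appended table; nothing else in the loop changes:
<pre>
cd C:\Users\JC\Bases\Stata
local filelist: dir . files "*.dta"
foreach file of local filelist {
    use "`file'", clear
    renvars *, lower
    tabout sex education [iw=fact] using "temp.xls", append h3("`file'")
}
</pre>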
tag:statalist.1588530.n2.nabble.com,2006:post-5745758gllamm, xtmixed, and level-2 standard errors2010-11-16T15:27:02Z2010-11-16T15:27:02ZTrey Causey
Greetings all. I am estimating a two-level, random-effects linear
<br/>model. I know that gllamm is not the most computationally efficient
<br/>option for this, but I am running into some very weird problems. I
<br/>have ~21,000 individuals nested in 16 countries. I have 9
<br/>individual-level predictors (listed as ind1-9) and 2 country-level
<br/>predictors (listed as c1 and c2). When I estimate the model using
<br/>gllamm, here are my results:
<br/><br/>. gllamm DV ind1 ind2 ind3 ind4 ind5 ind6 ind7 ind8 ind9 c1 c2,i(id)
<br/>adapt nip(16)
<br/><br/>Running adaptive quadrature
<br/>Iteration 0: log likelihood = -22865.024
<br/>Iteration 1: log likelihood = -22841.735
<br/>Iteration 2: log likelihood = -22807.82
<br/>Iteration 3: log likelihood = -22797.118
<br/>Iteration 4: log likelihood = -22794.274
<br/>Iteration 5: log likelihood = -22792.672
<br/>Iteration 6: log likelihood = -22791.582
<br/>Iteration 7: log likelihood = -22791.557
<br/>Iteration 8: log likelihood = -22791.428
<br/>Iteration 9: log likelihood = -22791.426
<br/><br/><br/>Adaptive quadrature has converged, running Newton-Raphson
<br/>Iteration 0: log likelihood = -22791.426 (not concave)
<br/>Iteration 1: log likelihood = -22791.426 (not concave)
<br/>Iteration 2: log likelihood = -22789.86
<br/>Iteration 3: log likelihood = -22789.371
<br/>Iteration 4: log likelihood = -22788.767
<br/>Iteration 5: log likelihood = -22788.613
<br/>Iteration 6: log likelihood = -22788.604
<br/>Iteration 7: log likelihood = -22788.604
<br/><br/>number of level 1 units = 21360
<br/>number of level 2 units = 16
<br/><br/>Condition Number = 433.81863
<br/><br/>gllamm model
<br/><br/>log likelihood = -22788.604
<br/><br/>------------------------------------------------------------------------------
<br/> DV | Coef. Std. Err. z P>|z| [95% Conf. Interval]
<br/>-------------+----------------------------------------------------------------
<br/> ind1 | -.0020515 .000392 -5.23 0.000 -.0028198 -.0012833
<br/> ind2 | -.3839988 .010841 -35.42 0.000 -.4052468 -.3627508
<br/> ind3 | -.079134 .0113476 -6.97 0.000 -.1013749 -.0568931
<br/> ind4 | .0800358 .0109386 7.32 0.000 .0585966 .101475
<br/> ind5 | .0468417 .0048978 9.56 0.000 .0372423 .0564411
<br/> ind6 | .1685022 .0149735 11.25 0.000 .1391546 .1978497
<br/> ind7 | -.2057474 .0171485 -12.00 0.000 -.2393579 -.1721368
<br/> ind8 | -.093775 .0094251 -9.95 0.000 -.1122479 -.0753021
<br/> ind9 | -.0080367 .0021554 -3.73 0.000 -.0122613 -.0038122
<br/> c1 | .762577 .0802034 9.51 0.000 .6053813 .9197727
<br/> c2 | .1763846 .0664327 2.66 0.008 .0461789 .3065903
<br/> _cons | 1.265279 .1023452 12.36 0.000 1.064686 1.465872
<br/>------------------------------------------------------------------------------
<br/><br/>Variance at level 1
<br/>------------------------------------------------------------------------------
<br/><br/> .49269203 (.00476915)
<br/><br/>Variances and covariances of random effects
<br/>------------------------------------------------------------------------------
<br/><br/><br/>***level 2 (id)
<br/><br/> var(1): .09866295 (.01101541)
<br/>------------------------------------------------------------------------------
<br/><br/>When I estimate the model using xtmixed or xtreg, the output is
<br/>essentially the same until I get to the country-level predictors; the
<br/>coefficients are slightly different and the standard errors are
<br/>approximately *ten* times smaller:
<br/><br/>. xtmixed DV ind1 ind2 ind3 ind4 ind5 ind6 ind7 ind8 ind9 c1 c2 || id:,mle
<br/>Performing EM optimization:
<br/>Performing gradient-based optimization:
<br/>Iteration 0: log likelihood = -22785.965
<br/>Iteration 1: log likelihood = -22785.965
<br/>Computing standard errors:
<br/>Mixed-effects ML regression Number of obs = 21360
<br/>Group variable: id Number of groups = 16
<br/> Obs per group: min = 730
<br/> avg = 1335.0
<br/> max = 2875
<br/><br/> Wald chi2(11) = 2296.06
<br/>Log likelihood = -22785.965 Prob > chi2 = 0.0000
<br/>------------------------------------------------------------------------------
<br/> DV | Coef. Std. Err. z P>|z| [95% Conf. Interval]
<br/>-------------+----------------------------------------------------------------
<br/> ind1 | -.0020472 .0003917 -5.23 0.000 -.002815 -.0012794
<br/> ind2 | -.3840113 .0108422 -35.42 0.000 -.4052615 -.362761
<br/> ind3 | -.0790874 .0113578 -6.96 0.000 -.1013483 -.0568264
<br/> ind4 | .0799408 .0109411 7.31 0.000 .0584966 .101385
<br/> ind5 | .0468955 .0048961 9.58 0.000 .0372994 .0564916
<br/> ind6 | .1686695 .0149734 11.26 0.000 .1393222 .1980167
<br/> ind7 | -.2054921 .0172501 -11.91 0.000 -.2393018 -.1716824
<br/> ind8 | -.0941011 .0093698 -10.04 0.000 -.1124655 -.0757367
<br/> ind9 | -.0079976 .0021584 -3.71 0.000 -.0122279 -.0037672
<br/> c1 | .6718781 .2659761 2.53 0.012 .1505744 1.193182
<br/> c2 | .1812668 .1083347 1.67 0.094 -.0310652 .3935988
<br/> _cons | 1.306302 .2079643 6.28 0.000 .8986998 1.713905
<br/>------------------------------------------------------------------------------
<br/>------------------------------------------------------------------------------
<br/> Random-effects Parameters | Estimate Std. Err. [95% Conf. Interval]
<br/>-----------------------------+------------------------------------------------
<br/>id: Identity |
<br/> sd(_cons) | .2033876 .0363049 .1433454 .2885792
<br/>-----------------------------+------------------------------------------------
<br/> sd(Residual) | .7019342 .0033974 .695307 .7086246
<br/>------------------------------------------------------------------------------
<br/>LR test vs. linear regression: chibar2(01) = 1684.50 Prob >= chibar2 = 0.0000
<br/><br/><br/>This is obviously a big problem for establishing significance. I have
<br/>read previous threads about this problem with xtlogit but have not
<br/>seen it mentioned for linear models, nor have I seen a solution. It
<br/>is not immediately clear to me why the estimates or standard errors
<br/>should differ at all -- as Rabe-Hesketh and Skrondal say in their
<br/>book, gllamm is not as computationally efficient for linear models but
<br/>the results should be essentially the same. I have replicated this in
<br/>Stata 10 and Stata 11.
<br/><br/>Thank you very much.
<br/>Trey
<br/>-----
<br/>Trey Causey
<br/>Department of Sociology
<br/>University of Washington
tag:statalist.1588530.n2.nabble.com,2006:post-7580545svar long run restrictions problem2014-06-29T04:51:00Z2014-06-29T04:51:00ZDeborah Sy
To the Stata Community,
<br/><br/>I am trying to run a 9-variable SVAR model with long-run constraints in the manner of Blanchard and Quah. Using the formula (n^2-n)/2, I need 36 constraints. I was able to place 36 restrictions in the matrix, but I cannot seem to generate the SVAR result. The following is my log file:
<br/><br/>svar var1 var2 var3 var4 var5 var6 var7 var8 var9, lreq(lr) lags(1/3)
<br/>With the current starting values, the constraints are not sufficient for identification
<br/>The constraints placed on C are
<br/>1999: [c_1_2]_cons = 0
<br/>1998: [c_1_7]_cons = 0
<br/>1997: [c_1_9]_cons = 0
<br/>1996: [c_2_5]_cons = 0
<br/>1995: [c_2_7]_cons = 0
<br/>1994: [c_2_8]_cons = 0
<br/>1993: [c_3_4]_cons = 0
<br/>1992: [c_3_5]_cons = 0
<br/>1991: [c_3_6]_cons = 0
<br/>1990: [c_3_7]_cons = 0
<br/>1989: [c_3_8]_cons = 0
<br/>1988: [c_3_9]_cons = 0
<br/>1987: [c_4_1]_cons = 0
<br/>1986: [c_4_3]_cons = 0
<br/>1985: [c_4_5]_cons = 0
<br/>1984: [c_4_6]_cons = 0
<br/>1983: [c_4_7]_cons = 0
<br/>1982: [c_5_2]_cons = 0
<br/>1981: [c_5_3]_cons = 0
<br/>1980: [c_5_4]_cons = 0
<br/>1979: [c_5_8]_cons = 0
<br/>1978: [c_5_9]_cons = 0
<br/>1977: [c_6_1]_cons = 0
<br/>1976: [c_6_3]_cons = 0
<br/>1975: [c_6_7]_cons = 0
<br/>1974: [c_6_9]_cons = 0
<br/>1973: [c_7_2]_cons = 0
<br/>1972: [c_7_8]_cons = 0
<br/>1971: [c_8_1]_cons = 0
<br/>1970: [c_8_2]_cons = 0
<br/>1969: [c_8_3]_cons = 0
<br/>1968: [c_8_4]_cons = 0
<br/>1967: [c_8_7]_cons = 0
<br/>1966: [c_9_1]_cons = 0
<br/>1965: [c_9_3]_cons = 0
<br/>1964: [c_9_6]_cons = 0
<br/>These constraints place 36 independent constraints on C
<br/>The order condition requires at least 36 constraints
<br/>r(498);
<br/><br/>I am currently using StataIC 12 for Windows (32-bit).
<br/><br/>Does anyone know what went wrong here? Your help is truly appreciated.
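<br/><br/>For comparison, a minimal sketch of how -lreq()- constraints are typically set up, shrunk to three variables; missing values (.) mark free parameters and zeros are the long-run exclusion restrictions, giving the n(n-1)/2 = 3 constraints of a Blanchard and Quah style lower-triangular C. The variables dy1-dy3 are placeholders:
<pre>
* Sketch: lower-triangular long-run impact matrix for a 3-variable SVAR.
matrix lr = (., 0, 0 \ ., ., 0 \ ., ., .)
svar dy1 dy2 dy3, lreq(lr) lags(1/2)
</pre>
The order condition only counts constraints; the rank condition (and the starting values) decide whether they actually identify C.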
tag:statalist.1588530.n2.nabble.com,2006:post-7580544How to create additional observations per each observation in my dataset?2014-06-17T19:35:09Z2014-06-17T19:38:52Zopka
My dataset reflects single purchases by product category and looks like this:
<br/><br/>product category units
<br/> 1 1
<br/> 2 1
<br/><br/>Given that I have 3 categories of products, I need to show that, for each observation, purchases in the other categories were zero in that period:
<br/><br/>product category units
<br/> 1 1
<br/> 2 0
<br/> 3 0
<br/> 1 0
<br/> 2 1
<br/> 3 0
<br/><br/><br/>In other words, per each product category, can you please help me generate missing product categories with units equal to zero?
<br/><br/>
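<br/><br/>The expansion described above is what -fillin- is designed for; a minimal sketch, assuming the variables are named category and units and that each original row can be tagged with a purchase identifier. (Note that fillin only combines values that occur somewhere in the data, so every category must appear at least once.)
<pre>
* Sketch: tag each purchase, rectangularize over categories,
* then zero-fill the combinations that did not occur.
gen purchase_id = _n
fillin purchase_id category
replace units = 0 if _fillin == 1
drop _fillin
sort purchase_id category
</pre>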