Tuesday, August 20, 2019
Deposits in Thermal Power Plant Condensers
Abstract: Unexpected fouling in condensers has always been one of the main operational concerns in thermal power plants. This paper describes an approach to predicting fouling deposits in thermal power plant condensers by means of support vector machines (SVMs). The periodic fouling formation process and the residual fouling phenomenon are analyzed. To improve the generalization performance of SVMs, an improved differential evolution algorithm is introduced to optimize the SVMs parameters. The prediction model based on the optimized SVMs is applied in a case study of a 300 MW thermal power station. The experimental results show that the proposed approach gives more accurate predictions and adapts better to changes in condenser operating conditions than the asymptotic model and the T-S fuzzy model.

Keywords: Fouling prediction; Condensers; Support vector machines; Differential evolution

1. Introduction

The condenser is one of the key pieces of equipment in the thermodynamic cycle of a thermal power plant, and its thermal performance directly affects the economic and safe operation of the overall plant [1]. Fouling of steam condenser tubes is one of the most important factors degrading their thermal performance, reducing effectiveness and heat transfer capability over time [2, 3]. The maximum decrease in effectiveness due to fouling has been found to be about 55% and 78% for evaporative coolers and condensers, respectively [2]. Consequently, the formation of fouling in the condensers of thermal power plants has special economic significance [4-6]. It also reflects the concerns of modern society regarding the conservation of limited resources, the environment and the natural world, and the improvement of industrial working conditions [6, 7]. The fouling of heat exchangers is a wide-ranging topic covering many aspects of technology, and the design and operation of a condenser must account for and estimate the fouling resistance to heat transfer. Knowledge of the progression and mechanisms of fouling formation allows an appropriate fouling mitigation strategy, such as an optimal cleaning schedule, to be designed.

The most commonly used models for fouling estimation are the thermal resistance method and the heat transfer coefficient method [6-10]. However, these models consider neither the residual fouling of the periodic fouling deposition process nor the dynamic changes in heat exchanger operating conditions. Consequently, their estimation error is very large.

Artificial neural networks (ANNs) are capable of efficiently dealing with many industrial problems that cannot be handled with the same accuracy by other techniques. To overcome most of the difficulties of traditional methods, ANNs have been used in recent years to estimate and control heat exchanger fouling. Prieto et al. [11] presented a model that uses non-fully connected feedforward artificial neural networks for forecasting the performance of a seawater-refrigerated power plant condenser. Radhakrishnan et al. [12] developed a neural network based fouling model using historical plant operating data. Teruela et al. [13] described a systematic approach to predicting ash deposits in coal-fired boilers by means of artificial neural networks. To minimize boiler energy and efficiency losses, Romeo and Gareta [14] presented a hybrid system that combines neural networks and fuzzy logic expert systems to control boiler fouling and optimize boiler performance.
Fan and Wang proposed models based on a diagonal recurrent neural network [15] and multiple RBF neural networks [16] for measuring fouling in thermal power plant condensers. Although ANNs can estimate the fouling evolution of heat exchangers satisfactorily, some problems remain. The selection of ANN structures and types depends heavily on experience, and ANN training is based on the empirical risk minimization (ERM) principle [18], which aims only at minimizing the training error. ANNs therefore suffer from disadvantages such as over-fitting, local optima and poor generalization ability.

Support vector machines (SVMs) are a machine learning method derived from statistical learning theory [18, 19]. Since the late 1990s, SVMs have become increasingly popular and have been successfully applied in many areas such as handwritten digit recognition, speaker identification, function approximation, chaotic time series forecasting and nonlinear control [20-24]. Built on the structural risk minimization (SRM) principle [19], SVMs have some distinct advantages over ANNs: a globally optimal solution, suitability for small sample sizes, good generalization ability and resistance to over-fitting [18-20].

In this paper, an SVM-based model is developed for predicting fouling in a thermal power plant condenser. The prediction model is applied in a case study of a 300 MW thermal power station. The experimental results show that the SVM-based prediction model is more accurate than the thermal resistance model and other methods such as the T-S fuzzy model [17]. Moreover, to improve the generalization performance of the SVMs, an improved differential evolution algorithm is introduced to optimize the SVM parameters.

2. Periodic fouling process in condensers

The accumulation of unwanted deposits on the surfaces of heat exchangers is usually referred to as fouling. In thermal power station condensers, fouling forms mainly inside the condenser tubes, reducing heat transfer between the hot fluid (steam condensing on the external surface of the tubes) and the cold water flowing through the tubes. The presence of fouling represents a resistance to heat transfer and therefore reduces the efficiency of the condenser. In order to maintain or restore efficiency it is often necessary to clean condensers. The Taprogge system, one of the available on-line cleaning systems, has found wide application in the power industry for maintaining condenser efficiency [6]. When the fouling accumulated in the condenser reaches a threshold, the sponge rubber ball cleaning system is activated: slightly oversized sponge rubber balls are continuously carried through the condenser tubes by the water flow, and the fouling in the condenser is reduced or eliminated. The processes of fouling accumulation and cleaning alternate over time, so the fouling evolution in power plant condensers is periodic. However, the sponge rubber ball system is only effective at preventing the accumulation of waterborne mud, biofilm formation, scale and corrosion product deposition [6]. Some inorganic materials strongly attached to the inside surface of the tubes, e.g. calcium and magnesium salts, cannot be effectively removed by this technique. As a result, at the end of every sponge rubber ball cleaning period a considerable amount of residual fouling remains in the condenser, and this residual fouling continues to accumulate over time.
The fouling that can be removed by the Taprogge system is called soft fouling, and the residual fouling that cannot be removed is called hard fouling. When the residual fouling has accumulated to a certain degree, cleaning techniques that can eliminate it, such as chemical cleaning, should be used. Generally, the degree of fouling of a heat exchanger is expressed as the fouling thermal resistance, defined as the difference between the rates of deposition and removal [6]. In this paper, the fouling thermal resistances corresponding to soft and hard fouling are denoted R_fs and R_fh, respectively. The condenser fouling thermal resistance R_f at any time is then the sum of the soft and hard fouling thermal resistances, as expressed in Eq. (1):

R_f(t) = R_fs(t) + R_fh(t) = R_f(t_0) + R_fs(t − t_0) + R_fh(t − t_0)   (1)

where R_f(t_0) is the initial fouling.

Fig. 1 Periodic fouling evolution in power plant condensers

Fig. 1 illustrates the periodic evolution of fouling in power plant condensers. In practice, the evolution of fouling in a condenser is very complex and is related to a large number of variables, such as the condenser pressure, the cooling water hardness, the velocity of the circulating water and the corresponding inlet and outlet temperatures, the non-condensing gases present in the condenser, and so on. R_fs(t) and R_fh(t) describe very complex physical and chemical processes, and accurate mathematical models for them are very hard to obtain. Hence, measurement and prediction of fouling development is a very difficult task. Since the fouling evolution process is a highly nonlinear dynamic system, traditional techniques based on mathematical analysis, i.e. asymptotic fouling models, are not able to describe it well [11]. SVMs, as a small-sample method for highly nonlinear classification and regression problems based on statistical learning theory, are expected to be able to reproduce the nonlinear behavior of the system.
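To make the decomposition in Eq. (1) concrete, the short Python sketch below evaluates a total fouling resistance as the sum of a soft component that resets at every sponge rubber ball cleaning and a hard component that keeps accumulating. The growth curves, the 48-hour cleaning interval and the initial value are illustrative assumptions, not the models identified in the paper.

```python
# Minimal sketch of the decomposition in Eq. (1): total fouling resistance is the
# sum of soft fouling (reset by each sponge rubber ball cleaning) and hard fouling
# (residual, accumulating across periods). The growth curves below are illustrative
# assumptions, not the paper's models.
import numpy as np

CLEANING_PERIOD_H = 48.0   # assumed on-line cleaning interval (hours)

def soft_fouling(t_in_period):
    """Asymptotic-style growth of soft fouling within one cleaning period (assumed form)."""
    return 0.40 * (1.0 - np.exp(-t_in_period / 15.0))    # K.m^2/kW, illustrative

def hard_fouling(t_total):
    """Slow build-up of residual (hard) fouling over accumulated running time (assumed form)."""
    return 1.0e-4 * t_total                               # K.m^2/kW, illustrative

def total_fouling(t_total, r_f0=0.0258):
    """Eq. (1): R_f(t) = R_f(t0) + R_fs(t - t0) + R_fh(t - t0), with R_fs reset each period."""
    t_in_period = t_total % CLEANING_PERIOD_H
    return r_f0 + soft_fouling(t_in_period) + hard_fouling(t_total)

if __name__ == "__main__":
    for t in (0.0, 24.0, 47.9, 48.1, 96.0, 632.0):
        print(f"t = {t:6.1f} h  ->  R_f = {total_fouling(t):.4f} K.m^2/kW")
```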
3. SVMs regression and parameters

3.1 SVMs regression

SVMs are a group of supervised learning methods that can be applied to classification or regression. They are an extension to nonlinear models of the generalized portrait algorithm developed by Vladimir Vapnik [18]. The SVMs algorithm is based on statistical learning theory and the Vapnik-Chervonenkis (VC) dimension introduced by Vladimir Vapnik and Alexey Chervonenkis [19]. Here, SVMs regression is applied to forecast the fouling in power plant condensers.

Let the given training data set be D = {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)}, where x_i ∈ R^d is an input vector, y_i ∈ R is its corresponding desired output, and n is the number of training samples. In SVMs, the original input space is mapped into a high-dimensional space called the feature space by a nonlinear mapping x → g(x). Let f(x) be the SVM output corresponding to the input vector x. In the feature space, a linear function is constructed:

f(x) = w^T g(x) + b   (2)

where w is a coefficient vector and b is a threshold. The SVMs are learned by minimizing the empirical risk on the training data, using the ε-insensitive loss function

L_ε(x, y, f) = |y − f(x)|_ε = max(0, |y − f(x)| − ε)   (3)

where ε is a positive parameter that tolerates approximation errors smaller than ε. The empirical risk is

R_emp(w) = (1/n) Σ_{i=1}^{n} L_ε(x_i, y_i, f)   (4)

Besides minimizing the ε-insensitive loss, SVMs also reduce model complexity by minimizing ‖w‖², which can be formulated with slack variables. Introducing the slack variables ξ_i and ξ_i*, SVMs regression is obtained as the following optimization problem:

min (1/2)‖w‖² + C Σ_{i=1}^{n} (ξ_i + ξ_i*)   (5)
s.t. y_i − f(x_i) ≤ ε + ξ_i,  f(x_i) − y_i ≤ ε + ξ_i*,  ξ_i, ξ_i* ≥ 0

where C is a positive constant to be regulated. Using the Lagrange multiplier method [18], the minimization of (5) becomes the following dual optimization problem:

max −(1/2) Σ_{i,j=1}^{n} (α_i − α_i*)(α_j − α_j*) K(x_i, x_j) − ε Σ_{i=1}^{n} (α_i + α_i*) + Σ_{i=1}^{n} y_i (α_i − α_i*)   (6)
s.t. Σ_{i=1}^{n} (α_i − α_i*) = 0,  0 ≤ α_i, α_i* ≤ C

where α_i and α_i* are Lagrange multipliers, and the kernel K(x_i, x_j) is a symmetric function equivalent to the dot product in the feature space:

K(x_i, x_j) = g(x_i)^T g(x_j)   (7)

Several kernels can be used, e.g. the polynomial kernel K(x, y) = (x·y + 1)^d and the hyperbolic tangent kernel K(x, y) = tanh(c_1 (x·y) + c_2). Here the Gaussian function is used as the kernel:

K(x, y) = exp(−‖x − y‖² / (2σ²))   (8)

Substituting λ_i = α_i − α_i* and using the relation α_i α_i* = 0, the optimization problem (6) can be rewritten as

max −(1/2) Σ_{i,j=1}^{n} λ_i λ_j K(x_i, x_j) − ε Σ_{i=1}^{n} |λ_i| + Σ_{i=1}^{n} y_i λ_i   (9)
s.t. Σ_{i=1}^{n} λ_i = 0,  −C ≤ λ_i ≤ C

The learning results for the training data set D can be derived from problem (9). Note that only some of the coefficients λ_i = α_i − α_i* are nonzero; the corresponding vectors x_i are called support vectors (SV). The regression function is then

f(x) = Σ_{i=1}^{p} (α_i − α_i*) K(x, x_i) + b   (10)

where p is the number of support vectors, and the constant b is given by

b = (1/2) [ min_i ( y_i − Σ_{j=1}^{p} (α_j − α_j*) K(x_i, x_j) ) + max_i ( y_i − Σ_{j=1}^{p} (α_j − α_j*) K(x_i, x_j) ) ]   (11)

3.2 SVM parameters

The quality of SVM models strongly depends on a proper setting of the parameters, and the approximation performance of SVMs is sensitive to them [25, 26]. The parameters to be regulated are the hyper-parameters C and ε and, if the Gaussian kernel is used, the kernel parameter σ [25]. The values of C, ε and σ are related to the actual object model and are not fixed across data sets, so the problem of parameter selection is complicated. The three parameters affect model complexity in different ways. The parameter C determines the trade-off between model complexity and the degree to which deviations larger than ε are tolerated. The parameter ε controls the width of the ε-insensitive zone and affects the number of support vectors in the optimization problem. The kernel parameter σ determines the kernel width and is related to the input range of the training data set. Here, parameter selection is treated as a compound optimization problem, and an improved differential evolution algorithm is proposed to select suitable parameter values.
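The following sketch is not the authors' implementation; it only illustrates, with scikit-learn's SVR and synthetic data, how the three parameters of Section 3.2 enter an ε-insensitive Gaussian-kernel regression: C as the regularization trade-off, ε as the insensitive-zone width, and σ through gamma = 1/(2σ²). The parameter values are those later reported for the soft fouling model and are shown only to indicate where they plug in; they are not tuned for this toy data.

```python
# A minimal sketch (not the authors' implementation) of epsilon-insensitive SVM
# regression with a Gaussian kernel, using scikit-learn's SVR. The training data
# here are synthetic placeholders.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 6))                     # 6 operating variables (placeholder)
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(200)   # synthetic target

C, epsilon, sigma = 848.0, 0.513, 0.0117         # example values reported for the soft fouling model
model = SVR(kernel="rbf",
            C=C,                                 # trade-off between flatness and tolerated deviations
            epsilon=epsilon,                     # width of the epsilon-insensitive zone (controls SV count)
            gamma=1.0 / (2.0 * sigma ** 2))      # Gaussian kernel of Eq. (8): exp(-||x-y||^2 / (2 sigma^2))
model.fit(X, y)

print("number of support vectors:", model.support_vectors_.shape[0])
print("prediction for first sample:", model.predict(X[:1])[0])
```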
4. Improved Differential Evolution

The differential evolution (DE) algorithm is a simple but powerful population-based stochastic search technique for solving global optimization problems [27]. DE has three operations: mutation, crossover and selection. The crucial idea behind DE is its scheme for generating trial vectors: mutation and crossover are used to generate trial vectors, and selection then determines which vectors survive into the next generation. The original DE algorithm is briefly described below.

4.1 Basic differential evolution

Let S ⊂ R^n be the search space of the problem under consideration. The DE algorithm uses NP n-dimensional vectors X_i^t = (x_{i1}^t, x_{i2}^t, ..., x_{in}^t) ∈ S, i = 1, 2, ..., NP, as the population of generation t. The initial population is generated randomly and should cover the whole parameter space. In each generation, two operators, mutation and crossover, are applied to each individual to yield a trial vector for each target vector. A selection phase then determines whether the trial vector enters the population of the next generation. For each target individual X_i^t, a mutant vector V_i^{t+1} = (v_{i1}^{t+1}, ..., v_{in}^{t+1}) is determined by

V_i^{t+1} = X_{r1}^t + F·(X_{r2}^t − X_{r3}^t)   (12)

where F > 0 is a real parameter, called the mutation constant, which controls the amplification of the difference vector (X_{r2}^t − X_{r3}^t) to avoid search stagnation. According to Storn and Price [27], F is set in (0, 2]. The indexes r1, r2 and r3 are randomly selected from the set {1, 2, ..., NP}; they must be different from each other and from the running index i, so NP must be at least four. Following the mutation phase, the crossover (recombination) operator is applied to the population. For each mutant vector V_i^{t+1}, a trial vector U_i^{t+1} = (u_{i1}^{t+1}, ..., u_{in}^{t+1}) is generated using the following scheme:

u_{ij}^{t+1} = v_{ij}^{t+1},  if rand(j) ≤ CR or j = randn(i)
u_{ij}^{t+1} = x_{ij}^t,      if rand(j) > CR and j ≠ randn(i)   (13)

where j = 1, 2, ..., n, rand(j) is the jth evaluation of a uniform random number generator within [0, 1], and CR is a crossover probability constant in the range [0, 1] that has to be chosen in advance by the user. randn(i) ∈ {1, 2, ..., n} is a randomly chosen index which ensures that U_i^{t+1} gets at least one element from V_i^{t+1}; otherwise no new vector would be produced and the population would not change. To decide whether the trial vector U_i^{t+1} should become a member of the next generation, it is compared with the corresponding target vector X_i^t, and the greedy selection strategy is adopted:

X_i^{t+1} = U_i^{t+1},  if f(U_i^{t+1}) < f(X_i^t)
X_i^{t+1} = X_i^t,      otherwise   (14)

4.2 Modification of the mutation

From the mutation Eq. (12) we can see that in the original DE three vectors are chosen at random for mutation and the base vector is then chosen at random among the three, which has an exploratory effect but slows down the convergence of DE. In order to accelerate convergence, a modified mutation scheme is adopted. The three randomly selected vectors are sorted in ascending order of their fitness function values: the tournament best vector is x_{tb}^t, the better vector is x_{tm}^t and the worst vector is x_{tw}^t. To speed up convergence, the base vector in the mutation equation should be x_{tb}^t, and the difference vector should point towards x_{tm}^t, i.e. (x_{tm}^t − x_{tw}^t) is chosen as the difference vector. The new modified mutation strategy is then given by Eq. (15):

V_i^{t+1} = x_{tb}^t + F·(x_{tm}^t − x_{tw}^t)   (15)

After this modification, the process explores the region around each x_{tb}^t in the direction of (x_{tm}^t − x_{tw}^t) for each mutated point. The mutation operator is no longer a purely random search but a directed search. However, since the vectors for mutation are still selected randomly from the population, the whole evolutionary process remains a stochastic search, which preserves the global optimization capability of the algorithm [28].
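A compact Python sketch of the improved DE described above, assuming a minimization problem: the three randomly drawn vectors are sorted by fitness so that the best serves as the base vector of Eq. (15), followed by the binomial crossover of Eq. (13) and the greedy selection of Eq. (14). Names such as improved_de are ours, and the sphere function at the end is only a quick self-test.

```python
# Sketch of the improved DE of Section 4: binomial crossover and greedy selection as in
# standard DE, but with the modified mutation of Eq. (15).
import numpy as np

def improved_de(objective, bounds, np_pop=30, F=0.5, CR=0.5, max_gen=100, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T              # bounds: list of (low, high) per dimension
    dim = len(bounds)
    pop = rng.uniform(lo, hi, size=(np_pop, dim))
    fit = np.array([objective(x) for x in pop])

    for _ in range(max_gen):
        for i in range(np_pop):
            # pick three distinct vectors different from i, sort them by fitness (Eq. 15)
            r = rng.choice([k for k in range(np_pop) if k != i], size=3, replace=False)
            tb, tm, tw = r[np.argsort(fit[r])]        # tournament best, better, worst
            mutant = np.clip(pop[tb] + F * (pop[tm] - pop[tw]), lo, hi)

            # binomial crossover (Eq. 13): at least one gene comes from the mutant
            jrand = rng.integers(dim)
            mask = (rng.random(dim) <= CR) | (np.arange(dim) == jrand)
            trial = np.where(mask, mutant, pop[i])

            # greedy selection (Eq. 14)
            f_trial = objective(trial)
            if f_trial < fit[i]:
                pop[i], fit[i] = trial, f_trial

    best = int(np.argmin(fit))
    return pop[best], fit[best]

if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x ** 2))          # simple self-test objective
    x_best, f_best = improved_de(sphere, bounds=[(-5, 5)] * 3)
    print("best x:", x_best, "objective:", f_best)
```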
5. Optimization procedure of IDE for SVMs

5.1 Objective function

The objective of SVM parameter optimization is to minimize the deviations between the outputs of the training data and the outputs of the SVMs. Here, the mean square error (MSE) is used as the performance criterion:

Obj = (1/K) Σ_{k=1}^{K} (y_k − f(x_k, w))²   (16)

where K is the number of training samples, y_k is the output of the kth training sample, and f(x_k, w) is the output of the SVMs corresponding to the input x_k. The objective of the IDE is then to search for the optimal parameters C, ε and σ that minimize Obj:

min F(C, ε, σ) = min Obj   (17)

Generally, the search ranges of these parameters are C ∈ [1, 1000], ε ∈ (0, 1] and σ ∈ (0, 0.5]. For particular problems the search ranges can be changed.

5.2 Optimization procedure

The search procedure of the improved differential evolution (IDE) for optimizing the SVM parameters is as follows.

Step 1: Input the training data and test data, and select the Gaussian kernel function.
Step 2: Specify the population size NP, the difference vector scale factor F, the crossover probability constant CR, and the maximum number of generations T. Randomly initialize the individuals, i.e. C, ε and σ, of the population and the trial vectors in the given search space. Set the current generation t = 0.
Step 3: Use each individual as the control parameters of the SVMs and train the SVMs on the training data.
Step 4: Calculate the fitness value of each individual in the population using the objective function given by Eq. (17).
Step 5: Compare the fitness values of the individuals and record the best fitness and best individual.
Step 6: Generate a mutant vector according to Eq. (15) for each individual.
Step 7: Perform the crossover operation according to Eq. (13) and obtain a trial vector.
Step 8: Execute the selection operation according to Eq. (14) and generate the new population.
Step 9: Set t = t + 1 and return to Step 3 until the maximum number of generations is reached.
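As a rough sketch of Steps 1-9, the snippet below couples the improved DE from the previous sketch (assumed to be saved as improved_de.py) with an SVR whose (C, ε, σ) are the decision variables. Because the plant data are not public, the training set is a random placeholder, and the MSE of Eq. (16) is evaluated on a held-out split rather than on the training points themselves, a small deviation intended to discourage over-fitting.

```python
# Sketch of the Section 5 procedure: the improved DE searches (C, epsilon, sigma) to
# minimize an MSE objective in the spirit of Eq. (16). Data loading is a placeholder.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from improved_de import improved_de       # the previous sketch, saved as improved_de.py (assumption)

# Placeholder for the 1362 training samples: 6 inputs (v, d, Ti, To, Ts, Tp) and soft fouling Rfs
rng = np.random.default_rng(1)
X = rng.uniform(size=(1362, 6))
y = rng.uniform(size=1362)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=1)

def objective(params):
    """MSE of the SVR trained with parameters (C, epsilon, sigma), cf. Eqs. (16)-(17)."""
    C, eps, sigma = params
    model = SVR(kernel="rbf", C=C, epsilon=eps, gamma=1.0 / (2.0 * sigma ** 2))
    model.fit(X_tr, y_tr)
    residuals = y_val - model.predict(X_val)
    return float(np.mean(residuals ** 2))

# Search ranges from Section 5.1: C in [1, 1000], epsilon in (0, 1], sigma in (0, 0.5]
# (open lower bounds approximated here by 1e-3).
bounds = [(1.0, 1000.0), (1e-3, 1.0), (1e-3, 0.5)]
best_params, best_mse = improved_de(objective, bounds, np_pop=30, F=0.5, CR=0.5, max_gen=100)
print("optimal (C, epsilon, sigma):", best_params, "validation MSE:", best_mse)
```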
6. Case study

6.1 Fouling prediction scheme

The formation and development of fouling in condensers is influenced not only by the cooling water hardness and turbidity but also by the working conditions of the condenser, such as the velocity of the cooling water and the corresponding inlet and outlet temperatures, the saturation temperature of the steam at the condenser entrance pressure, the non-condensing gases present in the condenser, and so on. According to the previous analysis of the periodic fouling process in power plant condensers, the fouling can be classified as soft fouling and hard fouling. Therefore, two SVM models are developed to forecast the thermal resistance of soft and hard fouling, respectively. The total predicted fouling thermal resistance in the condenser, R̂_f, is then the sum of the output of the soft fouling prediction model, R̂_fs, and the output of the hard fouling prediction model, R̂_fh.

Generally, the evolution of soft fouling is determined by the velocity (v), turbidity (d), inlet (Ti) and outlet (To) temperatures of the cooling water, the saturation temperature of the steam at the condenser entrance pressure (Ts), and the prediction time range (Tp), i.e. the running time within a sponge rubber ball cleaning period. These variables are therefore chosen as the inputs of the soft fouling thermal resistance prediction model. Hard fouling of the calcium and magnesium salt type is related to the residual fouling at the beginning and end of the previous sponge rubber ball cleaning period (with corresponding thermal resistances Rfb,n-1 and Rfe,n-1), the hardness of the cooling water (s), the saturation temperature of the steam at the condenser entrance pressure (Ts), and the accumulated running time of the condenser (Ta). These variables are chosen as the inputs of the hard fouling thermal resistance prediction model. The soft and hard fouling prediction models based on SVMs are illustrated in Fig. 2 and Fig. 3, respectively.

Fig. 2 Soft fouling prediction model
Fig. 3 Hard fouling prediction model

The parameters of the two prediction models are optimized by the IDE algorithm. Fig. 4 illustrates the fouling prediction model using SVMs optimized by IDE.

Fig. 4 Fouling prediction model based on SVMs optimized by IDE
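The two-model scheme of Fig. 2 to Fig. 4 could be wired together as in the hedged sketch below: one SVR per fouling component, each using the parameter values reported later in Section 6.2, with the total resistance obtained as the sum of the two predictions. The data are placeholders, so the number it prints is meaningless; only the structure is of interest.

```python
# Sketch of the Section 6.1 scheme: a soft fouling SVR on (v, d, Ti, To, Ts, Tp), a hard
# fouling SVR on (Rfb,n-1, Rfe,n-1, s, Ts, Ta), and the total resistance as their sum.
# Training data are random placeholders; parameter values are those reported in Section 6.2.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)

X_soft = rng.uniform(size=(500, 6))   # v, d, Ti, To, Ts, Tp
y_soft = rng.uniform(size=500)        # R_fs
X_hard = rng.uniform(size=(500, 5))   # Rfb,n-1, Rfe,n-1, s, Ts, Ta
y_hard = rng.uniform(size=500)        # R_fh

soft_model = SVR(kernel="rbf", C=848.0, epsilon=0.513, gamma=1.0 / (2 * 0.0117 ** 2)).fit(X_soft, y_soft)
hard_model = SVR(kernel="rbf", C=509.0, epsilon=0.732, gamma=1.0 / (2 * 0.0075 ** 2)).fit(X_hard, y_hard)

def predict_total_fouling(soft_inputs, hard_inputs):
    """Total predicted fouling resistance: R_f_hat = R_fs_hat + R_fh_hat."""
    r_fs = soft_model.predict(np.atleast_2d(soft_inputs))
    r_fh = hard_model.predict(np.atleast_2d(hard_inputs))
    return r_fs + r_fh

print(predict_total_fouling(rng.uniform(size=6), rng.uniform(size=5)))
```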
6.2 Experiment results

In this section, experiments on the N-3500-2 condenser (300 MW) of the Xiangtan thermal power plant are carried out to demonstrate the effectiveness of the proposed approach. The cooling water of this plant is river water pumped from the Xiangjiang river. Taprogge systems are installed in the plant to clean the condensers on line. At present, the condenser is cleaned every two days using the Taprogge system, and each cleaning takes about 6 hours. Obviously, this fixed cleaning period is not optimal, because the fouling accumulation process changes dynamically with the operating conditions. The experiment system consists of sensors for measuring the operating condition parameters, a data acquisition system, a PC-type computer, etc. A set of 1362 real-time running data points, covering different operating conditions in 84 cleaning periods, is collected to train and optimize the SVM fouling prediction models, and another set of 300 data points is used for model verification.

The proposed IDE is used to optimize the SVM parameters. The control parameters of the IDE are as follows: the population size is 30, the crossover probability constant CR is 0.5, the mutation factor F is 0.5, and the maximum number of generations is 100. The selection of these parameters is based on [27] and [28]. After applying the IDE, the optimal SVM parameters of the soft fouling prediction model are C = 848, ε = 0.513, σ = 0.0117, and the optimal SVM parameters of the hard fouling prediction model are C = 509, ε = 0.732, σ = 0.0075.

The velocity, turbidity and inlet temperature of the cooling water differ between summer and winter, so the evolution of fouling in the condensers also differs between the two seasons. In the experiments, four sponge rubber ball cleaning periods in different seasons are investigated: three of them, the 1st, 18th and 40th periods, are in summer, and the other period is in winter. The hardness and turbidity of the cooling water are 56 mg/L and 17 mg/L in summer, and 56 mg/L and 29 mg/L in winter.

To demonstrate the effectiveness of the proposed approach, a comparison between the SVMs model, the T-S fuzzy logic model [17] and the asymptotic model is considered. The asymptotic model is obtained by a probability analysis method, and its expression is [17]:

R̂_f(t) = 41.3·[1 − e^{−(t − 1.204)/14.57}]   (18)

Table 1 and Table 2 show the fouling thermal resistance prediction results of the three models in the first and the 18th cleaning periods, respectively. From Table 1 and Table 2 we can see that, compared with the traditional asymptotic model and the T-S fuzzy logic model, the SVM-based prediction model has higher prediction precision. Fig. 5 and Fig. 6 show the predicted fouling thermal resistance evolution based on the optimized SVMs model and the asymptotic model. Fig. 6 clearly shows that the asymptotic model is not able to forecast the fouling evolution at the beginning of the 18th cleaning period; the reason is that the residual fouling of the periodic fouling formation process is not considered in the asymptotic model.

Table 1. Fouling thermal resistance prediction results in the first cleaning period (measured Rf and predictions in K·m²/kW, relative errors in %)

Running time Tp (h) | v (m/s) | Ti (°C) | Ts (°C) | Measured Rf | SVMs | T-S | Asymptotic | Err SVMs | Err T-S | Err Asymptotic
0  | 2.0 | 19.1 | 33.2 | 0.0258 | 0.0260 | 0.0258 | -      | 0.62 | 0    | -
5  | 2.0 | 18.5 | 33.3 | 0.0995 | 0.0992 | 0.1018 | 0.0947 | 0.26 | 2.31 | 4.82
10 | 2.0 | 15.6 | 31.9 | 0.2028 | 0.2037 | 0.2007 | 0.1872 | 0.45 | 1.04 | 7.69
15 | 2.0 | 14.3 | 31.6 | 0.2501 | 0.2494 | 0.2411 | 0.2528 | 0.27 | 3.6  | 1.08
20 | 2.0 | 15.5 | 33.5 | 0.2865 | 0.2864 | 0.2830 | 0.2993 | 0.03 | 1.22 | 4.48
25 | 2.0 | 15.5 | 34.0 | 0.3174 | 0.3172 | 0.3123 | 0.3323 | 0.06 | 1.61 | 4.69
30 | 2.0 | 16.1 | 34.8 | 0.3420 | 0.3393 | 0.3321 | 0.3558 | 0.79 | 2.89 | 4.04
35 | 2.0 | 14.4 | 34.6 | 0.3567 | 0.3562 | 0.3497 | 0.3724 | 0.14 | 1.96 | 4.40
40 | 2.0 | 14.2 | 34.9 | 0.3722 | 0.3736 | 0.3600 | 0.3842 | 0.37 | 3.28 | 3.22

Table 2. Fouling thermal resistance prediction results in the 18th cleaning period (measured Rf and predictions in K·m²/kW, relative errors in %)

Running time Ta (h) | v (m/s) | Ti (°C) | Ts (°C) | Measured Rf | SVMs | T-S | Asymptotic | Err SVMs | Err T-S | Err Asymptotic
632 | 2.0 | 14.0 | 29.8 | 0.0774 | 0.0791 | 0.0774 | -      | 2.26 | 0    | -
637 | 2.0 | 14.2 | 30.9 | 0.1772 | 0.1773 | 0.1850 | 0.0947 | 0.06 | 4.40 | 46.56
642 | 2.0 | 12.5 | 30.4 | 0.2474 | 0.2479 | 0.2438 | 0.1872 | 0.21 | 1.46 | 24.33
647 | 2.0 | 11.9 | 30.4 | 0.2898 | 0.2908 | 0.2955 | 0.2528 | 0.36 | 1.97 | 12.77
652 | 2.0 | 10.6 | 30.1 | 0.3230 | 0.3222 | 0.3354 | 0.2993 | 0.25 | 3.84 | 7.34
657 | 2.0 | 11.4 | 31.5 | 0.3447 | 0.3437 | 0.3525 | 0.3323 | 0.28 | 2.26 | 3.60
662 | 2.0 | 10.2 | 31.2 | 0.3655 | 0.3652 | 0.3648 | 0.3558 | 0.08 | 0.19 | 2.65
667 | 2.0 | 10.7 | 32.0 | 0.3831 | 0.3815 | 0.3767 | 0.3724 | 0.42 | 1.67 | 2.79
672 | 2.0 | 11.8 | 33.5 | 0.3985 | 0.3978 | 0.3912 | 0.3842 | 0.18 | 1.83 | 3.59

To eliminate the influence of residual fouling and improve the prediction precision, an improved asymptotic model is …